
Exception: Gradients or original files don't match. File a bug report!

Open gsudre opened this issue 2 years ago • 18 comments

Hi! I'm getting this error when running qsiprep 0.16.1 on my data. The two DWI sequences have 45 volumes each, one in the AP and the other in the PA direction. There are no dedicated fmaps, and there are 2 T1ws being averaged. I was hoping to use TOPUP/PEPOLAR for SDC, so I'm wondering whether something odd in the DWI JSONs could be causing issues when the data are combined down the road?

Here's my call to qsiprep:

qsiprep ${TMPDIR}/BIDS/ ${tmp_out_dir} participant \
    --participant_label sub-${s} -w ${wrk_dir} \
    --notrack --nthreads $SLURM_CPUS_PER_TASK \
    --mem_mb $(( SLURM_MEM_PER_NODE - 2048 )) \
    --stop-on-first-crash \
    --unringing-method mrdegibbs \
    --output-resolution 1.2 \
    --template MNI152NLin2009cAsym \
    --distortion-group-merge average \
    --fs-license-file /usr/local/apps/freesurfer/license.txt

Here's what one of the DWI JSONs looks like (the other one is similar, but with PhaseEncodingDirection: j-):

(dcm2niix) [sudregp@ncrshell01 sub-test4]$ cat ses-baseline/dwi/sub-test4_ses-baseline_dir-AP_run-1_dwi.json 
{
  "Modality": "MR",
  "MagneticFieldStrength": 3,
  "ImagingFrequency": 127.824,
  "Manufacturer": "GE",
  "InternalPulseSequenceName": "EPI2",
  "ManufacturersModelName": "DISCOVERY MR750",
  "InstitutionName": "NIH FMRIF",
  "DeviceSerialNumber": "000301496MR3T5MR",
  "StationName": "fmrif3ta",
  "BodyPartExamined": "HEAD",
  "PatientPosition": "HFS",
  "ProcedureStepDescription": "MRI Brain",
  "SoftwareVersions": "27\\LX\\MR Software release:DV26.0_R03_1831.b",
  "MRAcquisitionType": "2D",
  "SeriesDescription": "EDTI_2mm_MB2_cdif45_AP",
  "ProtocolName": "[XT-ID:20-HG-0147]_NCR_3",
  "ScanningSequence": "EP\\RM",
  "SequenceVariant": "NONE",
  "ScanOptions": "CL_GEMS\\SAT_GEMS\\EDR_GEMS\\EPI_GEMS\\HYPERBAND_GEMS\\PFF\\FS",
  "PulseSequenceName": "edti",
  "ImageType": [
    "ORIGINAL",
    "PRIMARY",
    "OTHER"
  ],
  "SeriesNumber": 14,
  "AcquisitionTime": "14:50:55.000000",
  "AcquisitionNumber": 1,
  "SliceThickness": 2,
  "SpacingBetweenSlices": 2,
  "SAR": 0.410538,
  "EchoTime": 0.088,
  "RepetitionTime": 5.826,
  "FlipAngle": 90,
  "PhaseEncodingPolarityGE": "Flipped",
  "ShimSetting": [
    3,
    3,
    -17
  ],
  "PrescanReuseString": "RN/s7",
  "CoilString": "32Ch Head",
  "MultibandAccelerationFactor": 2,
  "PercentPhaseFOV": 100,
  "PercentSampling": 100,
  "AcquisitionMatrixPE": 110,
  "ReconMatrixPE": 128,
  "EffectiveEchoSpacing": 0.000601323,
  "TotalReadoutTime": 0.076368,
  "PixelBandwidth": 3906.25,
  "PhaseEncodingDirection": "j",
  "ImageOrientationPatientDICOM": [
    1,
    -0,
    0,
    -0,
    1,
    0
  ],
  "InPlanePhaseEncodingDirectionDICOM": "COL",
  "ConversionSoftware": "dcm2niix",
  "ConversionSoftwareVersion": "v1.0.20220720",
  "MultipartID": "dwi_1"
}

(I added MultipartID manually to see if it'd help the issue, but it didn't).

The crash log is attached: crash-20221223-182440-sudregp-concat-e65edb77-6e5c-4b05-9e24-ecb7ac025533.txt. And for completeness, here's the initial qsiprep output:

This dataset appears to be BIDS compatible.
        Summary:                  Available Tasks:        Available Modalities: 
        18 Files, 197.13MB                                T1w                   
        1 - Subject                                       dwi                   
        1 - Session                                       bold                  


        If you have any questions, please post on https://neurostars.org/tags/bids.

221223-16:28:01,860 nipype.workflow INFO:         Running with omp_nthreads=8, nthreads=32
221223-16:28:01,860 nipype.workflow IMPORTANT:
         
    Running qsiprep version 0.16.1:
      * BIDS dataset path: /lscratch/54848062/31942/BIDS.
      * Participant list: ['test4'].
      * Run identifier: 20221223-162801_14cbc70e-5f4a-4537-80a6-cd3e18659b17.
    
221223-16:28:02,847 nipype.workflow INFO:         Combining all dwi files within each available session:
221223-16:28:02,847 nipype.workflow INFO:
                - 2 scans in session baseline
221223-16:28:02,861 nipype.workflow INFO:
         [{'dwi_series': ['/lscratch/54848062/31942/BIDS/sub-test4/ses-baseline/dwi/sub-test4_ses-baseline_dir-AP_run-1_dwi.nii.gz'], 'dwi_series_pedir': 'j', 'fieldmap_info': {'suffix': 'rpe_series', 'rpe_series': ['/lscratch/54848062/31942/BIDS/sub-test4/ses-baseline/dwi/sub-test4_ses-baseline_dir-PA_run-1_dwi.nii.gz']}, 'concatenated_bids_name': 'sub-test4_ses-baseline_run-1'}]
221223-16:28:02,924 nipype.workflow IMPORTANT:
         Creating dwi processing workflow "dwi_preproc_ses_baseline_run_1_wf" to produce output sub-test4_ses-baseline_run-1 (1.05 GB / 55 DWIs). Memory resampled/largemem=1.21/1.26 GB.

Let me know if you need more details!

Thanks,

Gustavo

gsudre avatar Dec 23 '22 23:12 gsudre

Hi,

You can enclose inline code blocks with triple backquotes. The crash report should have been saved to a txt file, which you can drag and drop as an attachment.

If you're willing to experiment, could you try removing --distortion-group-merge average?

cookpa avatar Dec 24 '22 00:12 cookpa

Thanks! I updated my initial post, and gave it a try without --distortion-group-merge average. Same error :(

gsudre avatar Dec 24 '22 01:12 gsudre

@cookpa Are there any other tests I should run, or maybe you'd like to see other log files? Thanks!

gsudre avatar Dec 28 '22 19:12 gsudre

I'm going to run a test to see if I can reproduce

cookpa avatar Dec 29 '22 00:12 cookpa

I ran some HCP data without problems on 0.16.1, so I think it's something particular to your data.

I'm not sure what to suggest at this point. You could look at the confound files mentioned in the crash log to see if there are any clues there.

If you can share example data, I will run it and investigate further.

cookpa avatar Dec 30 '22 18:12 cookpa

I think I figured out what's going on. I was looking at the comparison that throws the exception on line 178 of confounds.py, and my bval column is clearly different:

[sudregp@cn0944 concat]$ awk -F',' '{print $14,$11,$12,$13}' confounds_data.csv | head                                                    
bval grad_x grad_y grad_z
0.0 0.0 0.0 0.0
199.0 1.0 0.0 0.0
199.0 0.44627 0.8949 0.0
199.0 0.44591 0.27459 0.85192
0.0 0.0 0.0 0.0
499.0 1.0 0.0 0.0
499.0 0.44662 0.89472 0.0
499.0 0.44657 0.27447 0.85161
1100.0 1.0 0.0 0.0
[sudregp@cn0944 concat]$ awk -F',' '{print $15,$16,$17,$18}' denoising.csv | head                                                         
original_bval original_bx original_by original_bz
0.0 0.0 0.0 0.0
199.624 1.0 0.0 0.0
199.387 0.44627 0.8949 0.0
199.714 0.44591 0.27459 0.85192
0.0 0.0 0.0 0.0
499.704 1.0 0.0 0.0
499.631 0.44662 0.89472 0.0
499.732 0.44657 0.27447 0.85161
1100.0 1.0 0.0 0.0

I haven't tested it yet, but I'm assuming that if I round my bval file to the nearest integer it should all work. Maybe the data is being converted to integer somewhere along the line, and then, when recast to float, fails the comparison check?
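To illustrate that hypothesis (a minimal sketch, not the actual qsiprep internals; filenames are placeholders): an integer cast truncates the bvals, so an exact comparison against the original floats fails, while rounding the bval file beforehand keeps both sides consistent:

```python
import numpy as np

# Original bvals as written by dcm2niix (floats, as in denoising.csv above)
original = np.array([199.624, 499.704, 1100.0])

# Casting to int truncates (199.624 -> 199), so an exact comparison
# against the original floats fails later on
truncated = original.astype(int).astype(float)
print(np.array_equal(original, truncated))  # False

# Workaround: round the bval file to the nearest integer before qsiprep
np.savetxt("dwi.bval", original[np.newaxis, :], fmt="%s")  # toy input file
bvals = np.loadtxt("dwi.bval")
np.savetxt("dwi_rounded.bval", np.rint(bvals)[np.newaxis, :], fmt="%d")
print(open("dwi_rounded.bval").read().split())  # ['200', '500', '1100']
```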

In any case, thanks for your help!

gsudre avatar Dec 31 '22 01:12 gsudre

That's an interesting lead, thanks!

cookpa avatar Dec 31 '22 01:12 cookpa

Yep, I think that's the issue. I reckon if you round the bvals to integers in the BIDS dwi/ directory, it will work.

@mattcieslak it looks like bvals getting truncated to integers is causing problems here

https://github.com/PennLINC/qsiprep/blob/7603d66c12039278905c467b5424a0f998bf0ca1/qsiprep/interfaces/dwi_merge.py#L426-L430

cookpa avatar Dec 31 '22 01:12 cookpa

bvals also saved as ints here

https://github.com/PennLINC/qsiprep/blob/e896d0b432baef93fbf2bfddf6bb1eae45c6beff/qsiprep/interfaces/gradients.py#L668

and here

https://github.com/PennLINC/qsiprep/blob/e896d0b432baef93fbf2bfddf6bb1eae45c6beff/qsiprep/interfaces/gradients.py#L108
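For reference (a small sketch, not the linked qsiprep code itself): NumPy's integer cast truncates toward zero rather than rounding, which is why 199.624 comes back as 199.0 instead of 200.0:

```python
import numpy as np

bvals = np.array([199.624, 499.704, 1100.0])
print(bvals.astype(int))           # truncated: 199, 499, 1100
print(np.rint(bvals).astype(int))  # rounded:   200, 500, 1100
```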

cookpa avatar Dec 31 '22 15:12 cookpa

wow, this is a great bug find. Does the BIDS validator say that your bval files are valid?

mattcieslak avatar Jan 02 '23 16:01 mattcieslak

@mattcieslak that's a great question. The BIDS spec is not clear on this. It explicitly says bvecs should be floating point, but doesn't say either way for bvals; neither does the linked FSL page.

https://bids-specification.readthedocs.io/en/stable/04-modality-specific-files/01-magnetic-resonance-imaging-data.html#required-gradient-orientation-information

cookpa avatar Jan 02 '23 16:01 cookpa

That section of the bids spec is also wrong about how fsl orients bvecs. Probably worth a fresh PR

mattcieslak avatar Jan 02 '23 16:01 mattcieslak

wow, this is a great bug find. Does the BIDS validator say that your bval files are valid?

The BIDS validator was fine with it, not even a warning. I don't understand the DWI physics well enough to know whether bvals are always integers, so maybe that's a moot point. But I'm not sure why integers should be a general requirement, unless some software in the pipeline needs to read integers, which I don't think is the case.

gsudre avatar Jan 02 '23 17:01 gsudre

I don't think there's any reason there shouldn't be floats in the bvals, except that it reduces readability.

In FSL, bvals are required to have units of s/mm^2. In these units, I can't think of a need for floating point precision, because the difference between b=1000 and b=1000.5 is negligible. Maybe it would matter for IVIM or other techniques that use very small b-values, but even then it might not be worthwhile because of physical limitations on the precision of the computed b-values.

Edit: I previously said dcm2niix writes integers, but it produces floats, at least for Philips data #600

cookpa avatar Jan 02 '23 17:01 cookpa

It's probably better to be inclusive rather than exclusive. How many decimal places do you think we should use to represent bvals internally @cookpa?

mattcieslak avatar Jan 02 '23 20:01 mattcieslak

I'm not sure, maybe 3?

IMO, writing them out with a fixed number of decimals would be best, e.g. 1000.000 2000.120 3000.234 rather than 1000.0 2000.12 3000.234.
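With NumPy that could look like this (a sketch; the output filename is illustrative):

```python
import numpy as np

bvals = np.array([1000.0, 2000.12, 3000.234])
# fmt="%.3f" pads every value to exactly three decimal places
np.savetxt("fixed.bval", bvals[np.newaxis, :], fmt="%.3f")
print(open("fixed.bval").read().strip())  # 1000.000 2000.120 3000.234
```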

cookpa avatar Jan 02 '23 21:01 cookpa

@gsudre does --distortion-group-merge concat work?

mattcieslak avatar Mar 21 '23 17:03 mattcieslak

@mattcieslak no, I get the same error. It does go through if I use a rounded/integer version of the bvals, though.

gsudre avatar Apr 04 '23 14:04 gsudre