ENH: interface for whitestripe normalization
Summary
A processing pipeline for MS usually follows these preprocessing steps:
- N4 correction
- Registration
- Skull stripping
- Intensity normalization
nipype provides interfaces to existing tools for all of these steps except intensity normalization. The WhiteStripe method is especially well suited to MS brains, since lesions can skew mean/variance-based normalization.
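To make the fit concrete, here is a minimal sketch of where a WhiteStripe node could sit in such a pipeline. The ANTs and FSL interfaces are existing nipype interfaces; the WhiteStripe import path and its trait names (`in_file`, `img_type`) are assumptions based on this PR, not a finalized API.

```python
# Sketch only: the WhiteStripe import path and trait names are assumptions.
from nipype.pipeline.engine import Node, Workflow
from nipype.interfaces.ants import N4BiasFieldCorrection, RegistrationSynQuick
from nipype.interfaces.fsl import BET
from nipype.interfaces.whitestripe import WhiteStripe  # interface proposed in this PR

wf = Workflow(name="ms_preproc")

n4 = Node(N4BiasFieldCorrection(), name="n4")        # N4 bias field correction
n4.inputs.input_image = "sub-01_T1w.nii.gz"          # placeholder input

reg = Node(RegistrationSynQuick(), name="register")  # registration to a template
reg.inputs.fixed_image = "template_T1w.nii.gz"       # placeholder template

strip = Node(BET(), name="skullstrip")               # skull stripping (FSL BET)

norm = Node(WhiteStripe(), name="whitestripe")       # intensity normalization
norm.inputs.img_type = "T1"                          # assumed trait name

wf.connect([
    (n4, reg, [("output_image", "moving_image")]),
    (reg, strip, [("warped_image", "in_file")]),
    (strip, norm, [("out_file", "in_file")]),
])
# wf.run()
```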
List of changes proposed in this PR (pull-request)
- Added WhiteStripe interface
Acknowledgment
- [x] (Mandatory) I acknowledge that this contribution will be available under the Apache 2 license.
Codecov Report
Merging #3351 (3a64072) into master (0289137) will decrease coverage by 0.03%. The diff coverage is 43.10%.
@@ Coverage Diff @@
## master #3351 +/- ##
==========================================
- Coverage 65.08% 65.05% -0.04%
==========================================
Files 307 308 +1
Lines 40362 40420 +58
Branches 5329 5333 +4
==========================================
+ Hits 26270 26295 +25
- Misses 13019 13050 +31
- Partials 1073 1075 +2
Flag | Coverage Δ |
---|---|
unittests | 64.82% <43.10%> (+0.01%) :arrow_up: |
Flags with carried forward coverage won't be shown.
Impacted Files | Coverage Δ |
---|---|
nipype/interfaces/r.py | 83.33% <ø> (ø) |
nipype/interfaces/whitestripe.py | 43.10% <43.10%> (ø) |
Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0289137...3a64072. Read the comment docs.
@effigies It looks like I have some more work to do to fix the failing CI tests, but I'm wondering if this is something you'd accept into nipype in the first place?
Absolutely! Please let me know if you want feedback before working on the tests, or if you want to get things fixed up and then get feedback.
@effigies I’d love some feedback before working on the tests, as I’m not sure how stringent they should be. Right now the test runs whitestripe on a NIfTI file downloaded from a public dataset, then simply checks that the output file was created. A more stringent test might actually compare the intensities of the normalized vs. original image, but I’m not sure whether our tests should be for whitestripe itself, or just testing that the interface generates the correct command-line string. I think a robust test would actually run whitestripe (maybe in addition to checking the command-line string?), but that’s complicated by the fact that neither R nor the whitestripe R package is installed in the official nipype docker image. As the whitestripe interface isn’t usable unless whitestripe is installed, perhaps as a first step I should submit a PR to create a (perhaps separate?) container image with R and whitestripe installed?
> I’m not sure whether our tests should be for whitestripe itself, or just testing that the interface generates the correct command-line string
Generally nipype concerns itself with the command-line, if we're not implementing the algorithms ourselves. There may be some rare cases where we do exercise the underlying tools, but that's typically going to be when the operations are pretty cheap.
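For concreteness, a command-line-level test might look roughly like the sketch below. It assumes the interface exposes the standard `.cmdline` property and input names like `in_file`/`out_file`, and that the generated command mentions whitestripe; all of these may differ from the actual spec in this PR.

```python
# Minimal sketch of a cmdline-level test; trait names and command contents are assumptions.
from nipype.interfaces.whitestripe import WhiteStripe


def test_whitestripe_cmdline(tmp_path):
    ws = WhiteStripe()
    ws.inputs.in_file = "sub-01_T1w.nii.gz"
    ws.inputs.out_file = str(tmp_path / "normalized.nii.gz")
    # Nothing is executed here; we only inspect the command nipype would run,
    # so the test does not need R or the WhiteStripe package installed.
    cmd = ws.cmdline
    assert "whitestripe" in cmd.lower()
    assert "sub-01_T1w.nii.gz" in cmd
```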
> As the whitestripe interface isn’t usable unless whitestripe is installed, perhaps as a first step I should submit a PR to create a (perhaps separate?) container image with R and whitestripe installed?
If you want to install R and whitestripe, then go ahead and do it in this PR.
@effigies It seems one of the checks is failing because the auto test, `nipype/interfaces/tests/test_auto_WhiteStripe.py`, doesn't exist. I tried running `make specs`, but that didn't seem to generate the file? I also tried running everything with just `make` and got a segmentation fault from Sphinx, though I'm not sure if that's just a problem with my environment.
But otherwise I rewrote the test (which passes for me locally) and added R and whitestripe to the nipype docker image. One thing to note is that the main image is now 6.98 GB on my machine; I'm not sure how much of a size reduction you get by compressing layers afterwards, but R is a hefty dependency and so are the neuroimaging packages required by whitestripe. Maybe it would make sense to offer a "slim" version of the image without R, and a "full" version with everything?
I ran `make specs` and pushed the new file to 3a64072779ce6ca077abde3ef80249f0e68a8f65. If you fetch from the main repository, you should be able to merge it into your branch.
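For reference, the spec tests generated by `make specs` follow a fixed pattern that only checks the interface's trait metadata, so they stay cheap to run; a rough sketch (the trait names and metadata below are placeholders, not the real generated spec):

```python
# Rough sketch of the test_auto_*.py pattern; trait names here are placeholders.
from nipype.interfaces.whitestripe import WhiteStripe


def test_WhiteStripe_inputs():
    input_map = dict(
        in_file=dict(mandatory=True),
        out_file=dict(),
    )
    inputs = WhiteStripe.input_spec()
    for key, metadata in list(input_map.items()):
        for metakey, value in list(metadata.items()):
            assert getattr(inputs.traits()[key], metakey) == value
```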
@effigies looks like all the tests are passing now.
On a somewhat unrelated note, I wanted to get your input / approval on another interface I wanted to add for the mimosa lesion segmentation method. Since training requires manually segmented images, I imagine nearly everyone will use the built-in models my lab has trained, but I think including the training step in the interface could still be useful. Should the training and testing parts be bundled into the same interface, or should there be two interfaces, one for training a new model and one for applying that model to new data?
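Purely to illustrate the second option, the split could look roughly like the spec sketch below; every class and trait name here is made up for illustration and does not exist in nipype or this PR.

```python
# Hypothetical sketch of two separate interfaces; nothing here exists yet.
from nipype.interfaces.base import (
    BaseInterfaceInputSpec,
    TraitedSpec,
    File,
    traits,
)


class MimosaTrainInputSpec(BaseInterfaceInputSpec):
    flair_images = traits.List(File(exists=True), desc="training FLAIR images")
    lesion_masks = traits.List(File(exists=True), desc="manual lesion segmentations")


class MimosaTrainOutputSpec(TraitedSpec):
    model = File(desc="trained mimosa model")


class MimosaSegmentInputSpec(BaseInterfaceInputSpec):
    in_file = File(exists=True, desc="image to segment")
    model = File(exists=True, desc="trained model, or one of the bundled pre-trained models")


class MimosaSegmentOutputSpec(TraitedSpec):
    lesion_mask = File(desc="predicted lesion mask")
```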
I also wanted to write a tutorial (like the ones here) showing how to segment MS lesions by building a nipype workflow that uses ANTs for bias correction / skull stripping, whitestripe for normalization, and then finally mimosa. I'm a bit confused about where the source files for the documentation are, though; I looked through the docs but don't see the many articles listed under the User Guide and Examples section of the website?