Tassilo Wald
If any of you are interested in creating a pull request, e.g. a "SmallInstanceTrainer" that uses a different sampling strategy, it would be greatly appreciated 😄 🚀
Also, as previously mentioned, the inflated pseudo-Dice numbers can be fixed (and maybe your overall model performance increased) by changing the foreground sampling strategy of the patches. By not...
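As a rough illustration of what such a trainer could look like, here is a minimal sketch. It assumes the nnU-Net v2 trainer API, where `nnUNetTrainer` exposes an `oversample_foreground_percent` attribute; the class name and the chosen value are placeholders, not a tuned recommendation.

```python
# Minimal sketch, assuming the nnU-Net v2 trainer API where nnUNetTrainer
# exposes an `oversample_foreground_percent` attribute (default 0.33).
from nnunetv2.training.nnUNetTrainer.nnUNetTrainer import nnUNetTrainer


class SmallInstanceTrainer(nnUNetTrainer):
    """Hypothetical trainer that samples foreground-containing patches more often."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Guarantee foreground in a larger fraction of sampled patches;
        # 0.66 is an arbitrary example value, not a recommendation.
        self.oversample_foreground_percent = 0.66
```

Assuming the file is placed where nnU-Net discovers custom trainers, it could then be selected via the `-tr SmallInstanceTrainer` option of `nnUNetv2_train`.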
Sorry for the late answer. Whenever you run into data loading issues like this, you should try to disable multiprocessing to receive a clearer error message than this obfuscated stack trace....
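For reference, one way to do that (assuming nnU-Net v2, where the `nnUNet_n_proc_DA` environment variable controls the number of data-augmentation workers) is to set it to 0 before launching training, so the underlying exception surfaces in the main process with a full traceback:

```python
# Sketch, assuming nnU-Net v2: nnUNet_n_proc_DA sets the number of
# data-augmentation worker processes. With 0, data loading runs in the
# main process and the real exception is raised with a readable traceback.
import os

os.environ["nnUNet_n_proc_DA"] = "0"
# ... then start training as usual, e.g. via nnUNetv2_train.
```

The same effect can be achieved by exporting the variable in the shell before calling `nnUNetv2_train`.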
nnU-Net is not aimed at classification, and there are currently no plans to integrate it. @KanielDatz 's recommendation is your best bet, but the training pipeline for this...
Generally, spacings are entered in millimeters, but in principle there is nothing stopping you from choosing another scale, as long as it is consistent. (But I would not recommend it, since it is bad practice.)...
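If you want to verify what scale your images actually use, a quick check could look like this (a sketch, assuming NIfTI images read with SimpleITK; the file name is a placeholder):

```python
# Sketch, assuming images are stored as NIfTI and read with SimpleITK;
# GetSpacing() reports the per-voxel spacing along each axis, normally in mm.
import SimpleITK as sitk

img = sitk.ReadImage("case_0000.nii.gz")  # placeholder file name
print(img.GetSpacing())  # e.g. (0.78, 0.78, 3.0) in x, y, z
```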
Closing due to inactivity
Seems like something is broken with your `.npy` files. You should probably delete the currently preprocessed data and restart the preprocessing and training to verify that all `.npz` files contain valid...
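Before re-running everything, a quick sanity check over the preprocessed files can also help locate the broken case (a sketch, assuming the nnU-Net v2 `nnUNet_preprocessed` folder layout; the dataset path is a placeholder):

```python
# Sketch: try to fully load every .npz under the preprocessed dataset folder
# and report the ones that fail. Path is a placeholder; adjust to your setup.
from pathlib import Path
import numpy as np

preprocessed_dir = Path("nnUNet_preprocessed/Dataset001_Example")
for npz_path in preprocessed_dir.rglob("*.npz"):
    try:
        with np.load(npz_path) as f:
            _ = [f[key].shape for key in f.files]  # force-read every array
    except Exception as e:
        print(f"corrupt: {npz_path} ({e})")
```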
Closing due to inactivity
Hey @karllandheer, currently nnU-Net does not support longitudinal images, but @mrokuss and @ykirchhoff may be able to help you out with how to use longitudinal data.
@karllandheer This is the associated repo. Feel free to open an issue there to get @mrokuss and @ykirchhoff to answer it 😉 https://github.com/MIC-DKFZ/Longitudinal-Difference-Weighting