Tassilo Wald
It seems like one of your workers died due to excessive memory consumption at inference time. Also, it seems like you are training with STU-Net, which is a large architecture and...
Thanks for pitching in @ancestor-mithril! As ancestor-mithril suggested, decrease the number of workers so you don't run into these memory issues, and monitor your RAM usage so you don't have to...
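If it helps, this is a sketch of how the worker counts can be reduced (the env var and the `-npp`/`-nps` flags refer to nnU-Net v2; please double-check them against your installed version, and the folder/dataset placeholders are of course yours to fill in):

```shell
# Fewer data-augmentation workers during training (nnU-Net reads this env var):
export nnUNet_n_proc_DA=4

# Fewer worker processes at inference time:
#   -npp = preprocessing workers, -nps = segmentation-export workers
nnUNetv2_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -d DATASET_ID -c 3d_fullres -npp 1 -nps 1
```

With `-npp 1 -nps 1` inference runs slower but peak RAM drops considerably, which is usually the right trade-off for large architectures like STU-Net.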
@xinglianglei Were you able to solve your problem?
@xinglianglei If no update is given, I will close this issue in the coming days due to inactivity.
You have to rebuild your `dataloader`. In nnU-Net, dataloading happens through [batchgenerators](https://github.com/MIC-DKFZ/batchgenerators). So you just copy the dict with the transforms that you use, pass it to the dataloader, and...
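A minimal sketch of that pattern, assuming the usual shape of a batchgenerators pipeline (the names `Compose` and `add_offset` here are illustrative stand-ins, not the exact library API): batches are plain dicts, and a composed list of transforms is applied to each batch in order.

```python
import copy

class Compose:
    """Apply a list of transforms to a batch dict, in order."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, data_dict):
        for t in self.transforms:
            data_dict = t(data_dict)
        return data_dict

def add_offset(data_dict):
    # Toy "transform": shift intensities, leave other keys (e.g. "seg") untouched.
    data_dict["data"] = [v + 1 for v in data_dict["data"]]
    return data_dict

# Copy the transform list you already use for training and hand it to your
# own loader, so your visualization sees exactly what the network sees:
train_transforms = [add_offset]
my_transforms = Compose(copy.deepcopy(train_transforms))

batch = {"data": [0, 1, 2], "seg": [0, 0, 1]}
out = my_transforms(batch)
print(out["data"])  # [1, 2, 3]
```

The key point is reusing the *same* transform list rather than reimplementing it, so the visualized samples match the training distribution.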
@Chandhinii-Techstudio Does the dataloading/visualization still pose an issue for you?
Generally you'd want to show your model the same data distribution it was trained on. So you should preprocess your inference data the same way as you do for the...
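To illustrate why this matters, here is a simplified sketch of z-score intensity normalization (the scheme nnU-Net uses for MRI, reduced here to flat lists): two images of the same anatomy acquired with different scanner scaling land on the same distribution once the identical preprocessing is applied to both.

```python
import statistics

def zscore_normalize(image):
    """Per-image z-score normalization: subtract the mean, divide by the
    standard deviation. A simplified stand-in for nnU-Net's MRI scheme."""
    mean = statistics.fmean(image)
    std = statistics.pstdev(image)
    return [(v - mean) / std for v in image]

train_image = [100.0, 120.0, 140.0]
test_image = [300.0, 360.0, 420.0]  # same structure, different intensity scale

# Identical preprocessing maps both onto the distribution the model saw:
print(zscore_normalize(train_image))
print(zscore_normalize(test_image))
```

Skipping (or changing) the normalization at inference time would hand the model intensities it has never seen, which is a common source of silent performance drops.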
I would recommend using HD-BET if you have an MRI dataset and TotalSegmentator if you have CT images. HD-BET is likely better (with the right modalities) as it...
Hey @Peaceandmaths, there are a variety of reasons why this may happen. As you mentioned, one reason could be **a shift of hospitals** between the train/val and test datasets. A few...
> If I understand correctly, the pseudo dice is not representative and is not supposed to be comparable with the val/test dice because of the 5 folds structure.

We're using the...
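For reference, a minimal sketch of the Dice coefficient under discussion: 2·|A∩B| / (|A| + |B|) for binary masks. The "pseudo dice" logged during nnU-Net training evaluates this metric on randomly sampled patches rather than on whole validation images, which is why it serves as a training-progress proxy rather than a number comparable to a proper val/test Dice.

```python
def dice(pred, ref):
    """Soerensen-Dice coefficient for binary masks given as flat 0/1 lists."""
    intersection = sum(p * r for p, r in zip(pred, ref))
    denom = sum(pred) + sum(ref)
    # Convention: two empty masks count as a perfect match.
    return 2.0 * intersection / denom if denom else 1.0

pred = [1, 1, 0, 0]
ref  = [1, 0, 1, 0]
print(dice(pred, ref))  # 0.5
```

On patches, the metric fluctuates with whichever regions happen to be sampled, so single-epoch values should not be over-interpreted.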