Ole Johannsen
Hello, is this question still relevant? Otherwise, I would close the issue. Cheers Ole
Did the mentioned thread help resolve your problem?
And just to make sure: are you running the latest versions of nnUNet and threadpoolctl (perhaps after reinstalling them)?
Hey, which command are you running precisely so I can try to reproduce the problem? Cheers Ole Johannsen
Hello, sorry for the late response. Is this question still relevant? Otherwise, I would close the issue. Cheers Ole
Hello, sorry for the late response! Is this question still relevant? To me it looks like the loss might still decrease if training is continued. In what way did the loss...
There are a couple of predefined trainers with more epochs, see https://github.com/MIC-DKFZ/nnUNet/blob/b4e97fe38a9eb6728077678d4850c41570a1cb02/nnunetv2/training/nnUNetTrainer/variants/training_length/nnUNetTrainer_Xepochs.py You can invoke these trainers using the -tr flag, e.g. nnUNetv2_train DATASET_NAME_OR_ID UNET_CONFIGURATION FOLD -tr nnUNetTrainer_8000epochs
Hello, did you manage to resolve the problem, or do you need further assistance? Otherwise, I will close the issue, as it's quite dated. Cheers Ole
Indeed, all pixels brighter than T are clipped to T, where T is the 99.5% quantile. The same applies analogously to the lower bound. This will only affect very few pixels and makes...
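To illustrate the idea, here is a minimal NumPy sketch of percentile-based clipping. This is just an illustration of the concept, not nnUNet's actual preprocessing code; the function name and the lower-bound percentile (0.5%) are assumptions for the example.

```python
import numpy as np

def clip_to_percentiles(image, lower_pct=0.5, upper_pct=99.5):
    """Clip intensities: values above the upper quantile are set to it,
    values below the lower quantile are set to it (illustrative only)."""
    lower, upper = np.percentile(image, [lower_pct, upper_pct])
    return np.clip(image, lower, upper)

img = np.arange(1000, dtype=float)  # toy "image" with intensities 0..999
clipped = clip_to_percentiles(img)
```

Only the few pixels outside the [0.5%, 99.5%] quantile range are changed; everything in between passes through untouched.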
Hello, sorry for the late response. Is this question still relevant? Otherwise, I would close the issue. Cheers Ole