A problem in training
I created the dataset; the label pixels had values 0.0 for background, 63.0 for liver, 126.0 for spleen, 189.0 for the left kidney, and 252.0 for the right kidney, so I remapped them in code to the scalars 0, 1, 2, 3, 4 (a sketch of that remapping is at the end of this comment). The preprocessing phase finished without any issue, but in the training phase I get this error:
File "/usr/local/bin/nnUNet_train", line 8, in
It confused me!
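For reference, the remapping was along these lines (a minimal sketch, not the exact script; file names and paths are placeholders):

```python
# Minimal sketch of the label remapping (placeholder paths, not the exact script used here).
import numpy as np
import SimpleITK as sitk

# Grey values in the exported label maps -> consecutive class indices expected by nnU-Net
VALUE_MAP = {0.0: 0, 63.0: 1, 126.0: 2, 189.0: 3, 252.0: 4}

def remap_label(in_path: str, out_path: str) -> None:
    img = sitk.ReadImage(in_path)
    arr = sitk.GetArrayFromImage(img)
    out = np.zeros(arr.shape, dtype=np.uint8)
    for grey, cls in VALUE_MAP.items():
        # use isclose because the source labels are stored as floats
        out[np.isclose(arr, grey)] = cls
    out_img = sitk.GetImageFromArray(out)
    out_img.CopyInformation(img)  # preserve spacing, origin and direction
    sitk.WriteImage(out_img, out_path)

remap_label("IMG_001.nii.gz", "IMG_001_remapped.nii.gz")
```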
It seems like you still have values in your segmentation that are not supposed to be there. Please run nnUNet_plan_and_preprocess with the --verify_dataset_integrity flag
Thank you for the reply, Fabian. I did that before and it seems OK:

Verifying training set
checking case IMG_001
checking case IMG_002
checking case IMG_003
checking case IMG_005
checking case IMG_008
checking case IMG_010
checking case IMG_013
checking case IMG_015
checking case IMG_019
checking case IMG_020
checking case IMG_021
checking case IMG_022
checking case IMG_031
checking case IMG_032
checking case IMG_033
checking case IMG_034
checking case IMG_036
checking case IMG_037
checking case IMG_038
checking case IMG_039
Verifying label values
Expected label values are [0, 1, 2, 3, 4]
Labels OK
Dataset OK
IMG_001 IMG_002 IMG_003 IMG_005 IMG_008 IMG_010 IMG_013 IMG_015 IMG_019 IMG_020 IMG_021 IMG_022 IMG_031 IMG_032 IMG_033 IMG_034 IMG_036 IMG_037 IMG_038 IMG_039
Hm, in that case it's really hard to say. Would you be able to share your dataset?
The problem is solved, dear Fabian. I deleted the nnUNet_cropped_data folder and reran the preprocessing stage. But now there is a new problem: the training gets stuck at epoch 0.
2022-03-09 10:29:04.543689: epoch: 0
/usr/local/lib/python3.7/dist-packages/torch/autocast_mode.py:141: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
  warnings.warn('User provided device_type of 'cuda', but CUDA is not available. Disabling')
The dataset is here: https://github.com/RezaRazmara/segmentation_dataset
Thanks for your help!
Hi, your GPU is not set up correctly. I cannot help you with that - please google the issue :-) There are plenty of threads about this already.
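For anyone hitting the same warning: a quick way to confirm whether PyTorch can actually see a GPU (a minimal check, assuming a standard PyTorch install) is:

```python
# Quick sanity check that PyTorch sees a CUDA-capable GPU.
import torch

print("CUDA available:", torch.cuda.is_available())    # False -> CPU-only build or driver problem
print("PyTorch built with CUDA:", torch.version.cuda)  # None -> a CPU-only wheel was installed
if torch.cuda.is_available():
    print("Device count:", torch.cuda.device_count())
    print("Device name:", torch.cuda.get_device_name(0))
```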
Hey @RezaRazmara
Did you manage to get the training running on your GPU or do you have any follow-up questions? Otherwise I would close this issue.