CodeHarcourt
I’m currently running into a problem: I followed the README.md, but when I use the command `nnUNet_plan_and_preprocess -t 500`, the terminal shows `nnUNet_plan_and_preprocess: command not found`....
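(Aside, not part of the original comment: a quick diagnostic sketch that checks which of the two entry-point names mentioned in this thread are actually on the PATH. Only the command names already quoted here are used; nothing else is assumed.)

```python
# Check which nnUNet entry points are installed and reachable on the PATH.
# If only the v2 name resolves, the nnUNetv2_* commands should be used.
import shutil

for cmd in ("nnUNet_plan_and_preprocess", "nnUNetv2_plan_and_preprocess"):
    path = shutil.which(cmd)
    print(f"{cmd}: {path if path else 'not found'}")
```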
Thank you for your help! But now I have a new problem, which is `nnUNetv2_plan_and_preprocess -d 500 Fingerprint extraction... Traceback (most recent call last): File "/home/nas/.local/bin/nnUNetv2_plan_and_preprocess", line 8, in <module> sys.exit(plan_and_preprocess_entry())...
Thank you for being so helpful. Now I can run the command `nnUNetv2_plan_and_preprocess -d 500 --verify_dataset_integrity`, but there is another problem: `RuntimeError: Some segmentation images contained unexpected labels....
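(Aside, not from the thread: a short sketch for tracking down which label files trip this check. The folder path is a placeholder and should point at the dataset's labelsTr directory.)

```python
# Print the on-disk dtype and the unique label values of every segmentation,
# so files with floating-point or unexpected labels stand out.
import os
import nibabel as nib
import numpy as np

labels_dir = "nnUNet_raw/Dataset500_BraTS/labelsTr"  # placeholder path

for name in sorted(os.listdir(labels_dir)):
    if not name.endswith(".nii.gz"):
        continue
    img = nib.load(os.path.join(labels_dir, name))
    print(name, img.get_data_dtype(), np.unique(img.get_fdata()))
```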
> This means that `BRATS_360.nii.gz` has floating-point labels instead of integers. You can solve this problem by loading the segmentation mask, casting it to `np.uint8`, and saving it again....
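(A minimal sketch of what that suggestion could look like with nibabel; the filename comes from the error above, and the script overwrites the file in place, so keep a backup.)

```python
# Round the labels, cast to uint8, and write the mask back out.
import nibabel as nib
import numpy as np

path = "BRATS_360.nii.gz"  # file named in the error; adjust the path as needed
img = nib.load(path)
data = np.rint(img.get_fdata()).astype(np.uint8)

fixed = nib.Nifti1Image(data, img.affine, header=img.header)
fixed.set_data_dtype(np.uint8)  # make sure the on-disk dtype in the header is uint8 too
nib.save(fixed, path)
```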
Thank you. I tried your suggestion and then found that the data type did not change; it is still float64. I changed my code. Now it is `import os import nibabel as nib...
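(For what it's worth, not from the original comments: `get_fdata()` always returns a float64 array in memory regardless of the dtype stored on disk, so inspecting that array's dtype will show float64 even after a successful cast. The dtype actually written to the file is the header's data dtype, which can be checked with `get_data_dtype()`, e.g.:)

```python
# The in-memory array from get_fdata() is always floating point; the dtype
# written to the file is the header's data dtype.
import nibabel as nib

img = nib.load("BRATS_360.nii.gz")  # illustrative path
print(img.get_fdata().dtype)        # always float64, even for a uint8 file
print(img.get_data_dtype())         # the on-disk dtype, e.g. uint8 after the fix
```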
OK. I used two different methods to change the data type and then combined the results into one folder. That solved the problem. Thank you very much. Now I am starting to train...
Yeah, bro. I have a new problem with the train_loss and the val_loss: they are too high, and they rise together. This means the dataset may need...
I am trying to train the BraTS 2020 dataset with this model, and when I use the native dataset to train the model there is a problem with the tensor...
Do your train_loss and val_loss rise together? You have only trained for 5 epochs; can you show results from more epochs? When I trained for 50 epochs I found that the train_loss and val_loss...
Have you solved this problem? Can you show me your train_loss and val_loss?