3DUnetCNN
I have already trained a model. How can I test whether the model's segmentation is good?
You can generate predictions using predict.py and then create a notebook or script to load the images with nibabel/numpy and compute metrics against the ground truth. MONAI has some metrics you can use: https://docs.monai.io/en/stable/metrics.html
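For example, here is a minimal sketch of a Dice evaluation with nibabel and numpy (the filenames are placeholders; you could also swap the numpy Dice for one of the MONAI metrics linked above):

import nibabel as nib
import numpy as np

# Placeholder filenames; substitute your own prediction and ground-truth paths.
prediction = nib.load("test_seg/BraTS20_Training_048_prediction.nii.gz").get_fdata()
truth = nib.load("BraTS20_Training_048_seg.nii.gz").get_fdata()

def dice_score(pred, target):
    # Dice coefficient between two binary masks.
    intersection = np.logical_and(pred, target).sum()
    denominator = pred.sum() + target.sum()
    return 2.0 * intersection / denominator if denominator > 0 else 1.0

# Compute the Dice score per label, skipping the background label (0).
for label in np.unique(truth)[1:]:
    print(f"label {int(label)}: Dice = {dice_score(prediction == label, truth == label):.3f}")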
Sorry, my description was wrong. What I actually want to do is test one of the .nii files in the dataset and use predict.py to get the segmented .nii file or an image of the segmentation. But I do not quite understand predict.py's parameters: python D:\3DUnetCNN\unet3d\scripts\predict.py --output_directory test_seg --config_filename brats2020_config.json --model_filename brats2020_config\fold5\model_best.pth --group test\BraTS20_Training_048_t1.nii
The filenames in the configuration file are split up into different "groups", typically: "training", "validation", and sometimes "test". If you are using the BRATS 2020 example, I think you should use "validation" as your group.
So the command should look like:
python D:\3DUnetCNN\unet3d\scripts\predict.py --output_directory test_seg --config_filename brats2020_config\fold5\config.json --model_filename brats2020_config\fold5\model_best.pth --group validation
Let me know if that works and I will add it to the tutorial.
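If you are not sure which groups your configuration file defines, here is a quick sketch for inspecting it in Python (this assumes the group lists are stored under keys ending in "_filenames", such as "validation_filenames" or "test_filenames"):

import json

# Placeholder path; point this at the configuration file you trained with.
with open("brats2020_config.json") as f:
    config = json.load(f)

# Print the filename groups defined in the configuration,
# e.g. "training_filenames", "validation_filenames", "test_filenames".
print([key for key in config if key.endswith("_filenames")])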
Sorry, I cannot find the file brats2020_config\fold5\config.json.
What was the training command that you ran?
Sorry to bother you. I am using my own dataset and the command you mentioned, but I do not get a normal 0-1 segmentation; instead I get a result like the one below. I wonder why this happened. (The command I used: python -m unet3d.scripts.predict --output_directory data_core\predicted_t --config_filename E:\Projects\TabularSeg_FiLM\3DUnetCNN\examples\CTPseg\CTPsegCore_config\fold1.json --model_filename CTPseg\CTPsegCore_config\fold1\model_best.pth --group test)
[attached screenshot of the prediction output showing non-binary values]
The predict.py script writes the raw output from your model, which will be logit values. When running predict.py you can specify an activation function such as "sigmoid" or "softmax" to pass the logit values through before the predictions are written to file. Then, to get binary values, you can load the images in Python using nibabel and apply a threshold with numpy (i.e. > 0.5) to get binary segmentation values.
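For example, here is a minimal sketch of that thresholding step with nibabel and numpy (the prediction filename is a placeholder for one of the files written by predict.py, and the sigmoid is only needed if you did not request an activation in predict.py):

import nibabel as nib
import numpy as np

# Placeholder filename; use one of the prediction files written by predict.py.
image = nib.load("data_core/predicted_t/prediction.nii.gz")
logits = image.get_fdata()

# Apply a sigmoid to turn the logits into probabilities
# (skip this if you already requested an activation in predict.py).
probabilities = 1.0 / (1.0 + np.exp(-logits))

# Threshold the probabilities to get a binary segmentation mask.
binary = (probabilities > 0.5).astype(np.uint8)

# Save the binary mask with the original affine.
nib.save(nib.Nifti1Image(binary, image.affine), "prediction_binary.nii.gz")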
Thanks a lot!
Here is what I did: I put my test set filenames in the configuration JSON file under "test_filenames". When the training was done, I got the test results.