apumonline
Hello, author! When I run the inference script step3_TestOrInference.py with fold_flag set to False, the model's predictions come out as all zeros. But when I set fold_flag to True, the model produces results and prediction images are generated. Below is the part of the code I changed:

img_path = r'tn3k_dian2\tn3k_point\test-image'
mask_path = r'tn3k_dian2\tn3k_point\test-mask'
csv_file = r'tn3k_dian2\output.csv'
save_path = r'inference'
# Whether to use the predefined fold split
fold_flag = False  # False: read the PNG files in img_path directly for testing; True: use the fold split

The model weight files I use also exist:

weight_c1 = r'result\TNSCUI\dpv3plus_stage1_5_1\models\epoch400_Testdice0.0000.pkl'
weight_c2 = r'result\TNSCUI\dpv3plus_stage2_5_1\models\epoch400_Testdice0.0000.pkl'

Could you tell me where the problem is?
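One thing worth ruling out before digging into the model: when fold_flag=False the script reads PNG files directly from img_path, so all-zero output can simply mean the file listing came back empty (wrong working directory, or a case-sensitive match on the '.PNG' suffix). This is a hypothetical sanity check, not code from the repo; list_pngs is a helper name I made up:

```python
import os
import tempfile

def list_pngs(folder):
    """Return PNG files in `folder`, matching the suffix case-insensitively,
    in case the loader filters on an exact '.PNG' or '.png' spelling."""
    if not os.path.isdir(folder):
        raise FileNotFoundError(f'image folder not found: {folder}')
    return sorted(f for f in os.listdir(folder)
                  if f.lower().endswith('.png'))

# Quick demo on a throwaway directory with mixed-case suffixes.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, 'a.PNG'), 'w').close()
    open(os.path.join(d, 'b.png'), 'w').close()
    print(len(list_pngs(d)))  # -> 2
```

In practice you would call list_pngs(img_path) from the directory you launch the script from; a count of 0 (or a FileNotFoundError) would explain empty predictions regardless of the weights.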
Hello, author! Today, when running predictions with step3_TestOrInference.py, the call Image.open(mask_file) fails. The error output is:

File "f:/liuxiao/TNSCUI2020-Seg-Rank1st-master/step2to4_train_validate_inference/step3_TestOrInference.py", line 202, in
    GT = Image.open(mask_file)
File "F:\miniconda\envs\TNSCUI_Rank1\lib\site-packages\PIL\Image.py", line 3218, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'tn3k_dian2\\tn3k_point\\test-mask\\0050'

The test dataset is placed under the project's main working directory, so the problem should not be a missing absolute path; when training the model I also read images using relative paths. Could it be an issue with the Image.open function itself?
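Note that the path in the traceback, 'tn3k_dian2\\tn3k_point\\test-mask\\0050', ends in a bare id with no file extension, so Image.open is being handed a filename that does not exist rather than failing on a valid image. A hedged workaround sketch (resolve_mask_file is a hypothetical helper, not part of the repo) is to try the common extensions when building the mask path:

```python
import os
import tempfile

def resolve_mask_file(mask_dir, stem, exts=('.PNG', '.png', '.jpg', '.bmp')):
    """Given a bare image id like '0050', try common extensions and
    return the first mask file that actually exists on disk."""
    for ext in exts:
        candidate = os.path.join(mask_dir, stem + ext)
        if os.path.isfile(candidate):
            return candidate
    raise FileNotFoundError(
        f'no mask found for {stem!r} in {mask_dir} (tried {exts})')

# Quick demo on a throwaway directory.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, '0050.PNG'), 'w').close()
    print(os.path.basename(resolve_mask_file(d, '0050')))  # -> 0050.PNG
```

The fix would then be to pass resolve_mask_file(mask_path, image_id) to Image.open instead of the extension-less path, assuming the ids in the CSV are stored without their suffix.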
Hello, author, I ran into an error during training. What is the specific cause of the error? The command I ran: python scripts/segmentation_train.py --data_name ISIC --data_dir F:\liuxiao\project\dataset\isbi_3b_medsegdiff --out_dir F:\liuxiao\project\MedSegDiff\outdir --image_size 256 --num_channels 128 --class_cond False...
Hello author, running the train.py file produced an error. The specific error output follows. The command I entered: python train.py -net sam -mod sam_adpt -exp_name medical-sam-ada...
"Hello, author. Your work on deformable DLKA is very impressive. However, we seem to have found an issue: in your paper, the DSC on the Synapse dataset using the nnformer is...
Hello, author! I would like to ask: for the PyTorch implementation of EfficientNetV2, is there a model pre-trained on ImageNet-1k available? Looking forward to your answer.