UNetPlusPlus
Task058 raises RuntimeError: Given transposed=1, weight of size [128, 64, 1, 2, 2], expected input[2, 64, 16, 160, 160] to have 128 channels, but got 64 channels instead
I tried to train on the Electron Microscopy dataset using Task058_ISBI_EM_SEG.py, but I get: RuntimeError: Given transposed=1, weight of size [128, 64, 1, 2, 2], expected input[2, 64, 16, 160, 160] to have 128 channels, but got 64 channels instead
Does the UNetPlusPlus code have any update, or do I need to modify something? Thank you very much!
My commands:
python3.7 Task058_ISBI_EM_SEG.py
nnUNet_plan_and_preprocess -t 058 --verify_dataset_integrity
nnUNet_train 3d_fullres nnUNetPlusPlusTrainerV2 Task058_ISBI_EM_SEG 0 --npz
Printed logs:
###############################################
I am running the following nnUNet: 3d_fullres
My trainer class is: <class 'nnunet.training.network_training.nnUNetPlusPlusTrainerV2.nnUNetPlusPlusTrainerV2'>
For that I will be using the following configuration:
num_classes: 1
modalities: {0: 'EM'}
use_mask_for_norm OrderedDict([(0, False)])
keep_only_largest_region None
min_region_size_per_class None
min_size_per_class None
normalization_schemes OrderedDict([(0, 'nonCT')])
stages...
stage: 0 {'batch_size': 2, 'num_pool_per_axis': [2, 6, 6], 'patch_size': array([ 16, 320, 320]), 'median_patient_size_in_voxels': array([ 30, 512, 512]), 'current_spacing': array([50., 4., 4.]), 'original_spacing': array([50., 4., 4.]), 'do_dummy_2D_data_aug': True, 'pool_op_kernel_sizes': [[1, 2, 2], [1, 2, 2], [1, 2, 2], [2, 2, 2], [2, 2, 2], [1, 2, 2]], 'conv_kernel_sizes': [[1, 3, 3], [1, 3, 3], [1, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]]}
I am using stage 0 from these plans
I am using sample dice + CE loss
I am using data from this folder: /UnetPlusPlus/UNetPlusPlus-master/UNetPlusPlus-master/pytorch/dataset/nnUNet_preprocessed/Task058_ISBI_EM_SEG/nnUNetData_plans_v2.1
###############################################
14:21:02.764737: Using dummy2d data augmentation
[[1, 2, 2], [1, 2, 2], [1, 2, 2], [2, 2, 2], [2, 2, 2], [1, 2, 2]]
loading dataset
loading all case properties
unpacking dataset
done
6 6 256 2 <class 'nnunet.network_architecture.generic_UNetPlusPlus.ConvDropoutNormNonlin'> <class 'torch.nn.modules.conv.ConvTranspose3d'>
6 128 2 <class 'nnunet.network_architecture.generic_UNetPlusPlus.ConvDropoutNormNonlin'> <class 'torch.nn.modules.conv.ConvTranspose3d'>
6 4 5
weight_decay: 3e-05
14:21:06.468369: lr: 0.01
using pin_memory on device 0
using pin_memory on device 0
14:21:08.702853: Unable to plot network architecture:
14:21:08.703407: No module named 'hiddenlayer'
14:21:08.703701: printing the network instead:
14:21:08.703849: Generic_UNetPlusPlus( (loc0): ModuleList( (0): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(640, 320, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(320, 320, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) (1): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(960, 320, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(320, 320, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) (2): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(1024, 256, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(256, 256, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, 
inplace=True) ) ) ) ) (3): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(640, 128, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(128, 128, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) (4): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(384, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(64, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) (5): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(224, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(32, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) ) (loc1): ModuleList( (0): 
Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(640, 320, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(320, 320, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) (1): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(768, 256, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(256, 256, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) (2): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(512, 128, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(128, 128, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) (3): Sequential( (0): StackedConvLayers( (blocks): 
Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(320, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(64, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) (4): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(192, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(32, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) ) (loc2): ModuleList( (0): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(512, 256, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(256, 256, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) (1): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): 
ConvDropoutNormNonlin( (conv): Conv3d(384, 128, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(128, 128, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) (2): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(256, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(64, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) (3): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(160, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(32, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) ) (loc3): ModuleList( (0): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): 
Conv3d(256, 128, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(128, 128, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) (1): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(192, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(64, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) (2): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(128, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(32, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) ) (loc4): ModuleList( (0): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(128, 64, kernel_size=[1, 3, 
3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(64, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) (1): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(96, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(32, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) ) (conv_blocks_context): ModuleList( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(1, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) (1): ConvDropoutNormNonlin( (conv): Conv3d(32, 32, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(32, 64, kernel_size=[1, 3, 3], stride=[1, 2, 2], padding=[0, 1, 1]) (instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) (1): ConvDropoutNormNonlin( (conv): Conv3d(64, 64, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (2): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(64, 128, kernel_size=[1, 3, 3], stride=[1, 2, 2], padding=[0, 1, 1]) (instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) (1): ConvDropoutNormNonlin( (conv): Conv3d(128, 128, kernel_size=[1, 3, 3], stride=(1, 1, 1), padding=[0, 1, 1]) (instnorm): InstanceNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (3): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(128, 256, kernel_size=[3, 3, 3], stride=[1, 2, 2], padding=[1, 1, 1]) (instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) (1): ConvDropoutNormNonlin( (conv): Conv3d(256, 256, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (4): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(256, 320, kernel_size=[3, 3, 3], stride=[2, 2, 2], padding=[1, 1, 1]) (instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) (1): ConvDropoutNormNonlin( (conv): Conv3d(320, 320, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, 
affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (5): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(320, 320, kernel_size=[3, 3, 3], stride=[2, 2, 2], padding=[1, 1, 1]) (instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) (1): ConvDropoutNormNonlin( (conv): Conv3d(320, 320, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (6): Sequential( (0): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(320, 320, kernel_size=[3, 3, 3], stride=[1, 2, 2], padding=[1, 1, 1]) (instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) (1): StackedConvLayers( (blocks): Sequential( (0): ConvDropoutNormNonlin( (conv): Conv3d(320, 320, kernel_size=[3, 3, 3], stride=(1, 1, 1), padding=[1, 1, 1]) (instnorm): InstanceNorm3d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False) (lrelu): LeakyReLU(negative_slope=0.01, inplace=True) ) ) ) ) ) (td): ModuleList() (up0): ModuleList( (0): ConvTranspose3d(320, 320, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False) (1): ConvTranspose3d(320, 320, kernel_size=[2, 2, 2], stride=[2, 2, 2], bias=False) (2): ConvTranspose3d(320, 256, kernel_size=[2, 2, 2], stride=[2, 2, 2], bias=False) (3): ConvTranspose3d(256, 128, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False) (4): ConvTranspose3d(128, 64, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False) (5): ConvTranspose3d(64, 32, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False) ) (up1): ModuleList( (0): ConvTranspose3d(320, 320, kernel_size=[2, 2, 2], stride=[2, 2, 2], bias=False) (1): 
ConvTranspose3d(320, 256, kernel_size=[2, 2, 2], stride=[2, 2, 2], bias=False) (2): ConvTranspose3d(256, 128, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False) (3): ConvTranspose3d(128, 64, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False) (4): ConvTranspose3d(64, 32, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False) ) (up2): ModuleList( (0): ConvTranspose3d(320, 256, kernel_size=[2, 2, 2], stride=[2, 2, 2], bias=False) (1): ConvTranspose3d(256, 128, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False) (2): ConvTranspose3d(128, 64, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False) (3): ConvTranspose3d(64, 32, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False) ) (up3): ModuleList( (0): ConvTranspose3d(256, 128, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False) (1): ConvTranspose3d(128, 64, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False) (2): ConvTranspose3d(64, 32, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False) ) (up4): ModuleList( (0): ConvTranspose3d(128, 64, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False) (1): ConvTranspose3d(64, 32, kernel_size=[1, 2, 2], stride=[1, 2, 2], bias=False) ) (seg_outputs): ModuleList( (0): Conv3d(32, 2, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False) (1): Conv3d(32, 2, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False) (2): Conv3d(32, 2, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False) (3): Conv3d(32, 2, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False) (4): Conv3d(32, 2, kernel_size=(1, 1, 1), stride=(1, 1, 1), bias=False) ) ) 14:21:08.713013:
14:21:08.713932:
epoch: 0
Traceback (most recent call last):
File "/python3.7.5/bin/nnUNet_train", line 11, in
Hey bro, I have the same problem. Have you solved it?
I have the same problem
I have the same problem
I found a possible solution to this. In my case (and, I think, in yours too), the error arose because the plan for this task includes 6 pooling stages and 7 convolutional stages. However, the UNetPlusPlus implementation only supports 5 pooling stages and 6 convolutional stages, because it has only 5 up stages; this causes a mismatch when indexing.
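To make the off-by-one concrete, here is a toy sketch (not the real nnU-Net code; the names and the fixed count of 5 up stages are illustrative, based on the explanation above and the plan printed in the log): the plan's pool_op_kernel_sizes has 6 entries, but the decoder only provides 5 up-sampling stage lists, so one pooling stage has no matching up stage, which surfaces as the channel-mismatch RuntimeError.

```python
# Toy illustration of the mismatch (not the real nnU-Net code).
# The generated plan requests 6 pooling ops (from the log above), but this
# UNet++ decoder is written with only 5 up-sampling stage lists (up0..up4).
POOL_OP_KERNEL_SIZES = [[1, 2, 2], [1, 2, 2], [1, 2, 2],
                        [2, 2, 2], [2, 2, 2], [1, 2, 2]]  # 6 pool stages
N_DECODER_UP_STAGES = 5  # fixed in this implementation


def check_plan_fits_decoder(pool_ops, n_up_stages):
    """Return True if every pooling stage has a matching up stage."""
    return len(pool_ops) <= n_up_stages


print(check_plan_fits_decoder(POOL_OP_KERNEL_SIZES, N_DECODER_UP_STAGES))
# False -> a plan with at most 5 pool stages would print True
```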
The possible solution I used was a custom planner derived from ExperimentPlanner3D_v21 that sets self.unet_max_numpool = 5, and to use that planner when calling nnUNet_plan_and_preprocess. I don't know if this is a generic solution, or whether it will impair model performance; please let me know if you find a better one.
I also have the same problem. @MenxLi, would you mind giving a more detailed explanation of how you fixed it? I am trying to follow your steps, but I get the following error: "RuntimeError: Could not find the Planner class MyExperimentPlanner_v21. Make sure it is located somewhere in nnunet.experiment_planning". I have made my custom planner like so:
from nnunet.experiment_planning.experiment_planner_baseline_3DUNet_v21 import \
    ExperimentPlanner3D_v21
from nnunet.paths import *


class MyExperimentPlanner_v21(ExperimentPlanner3D_v21):
    def __init__(self, folder_with_cropped_data, preprocessed_output_folder):
        # Call super() with this class's own name (the original snippet
        # referenced ExperimentPlanner3DFabiansResUNet_v21, a NameError here)
        super(MyExperimentPlanner_v21, self).__init__(
            folder_with_cropped_data, preprocessed_output_folder)
        # Cap pooling at 5 stages so the plan fits the UNet++ decoder
        self.unet_max_numpool = 5
I have then saved this in the experiment_planning folder. If you can help me, I would greatly appreciate it!
In the end I solved the issue by using smaller-resolution images. With smaller images the algorithm decides to use fewer pooling layers, and the error goes away.
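A quick way to check, before training, whether a given plan will fit this decoder is to count the pooling stages stored in the plans file. A sketch, assuming the default v2.1 plans layout (the file name nnUNetPlansv2.1_plans_3D.pkl and the dict keys below mirror the stage-0 plan printed in the log above; adjust the path to your setup):

```python
import pickle


def num_pool_stages(plans, stage=0):
    """Count the pooling ops in an nnU-Net plans dict for one stage."""
    return len(plans['plans_per_stage'][stage]['pool_op_kernel_sizes'])


# In practice, load the real file, e.g.:
#   with open("nnUNet_preprocessed/Task058_ISBI_EM_SEG/"
#             "nnUNetPlansv2.1_plans_3D.pkl", "rb") as f:
#       plans = pickle.load(f)
# Toy dict mirroring the stage-0 plan printed in the log:
plans = {'plans_per_stage': {0: {'pool_op_kernel_sizes': [
    [1, 2, 2], [1, 2, 2], [1, 2, 2], [2, 2, 2], [2, 2, 2], [1, 2, 2]]}}}

print(num_pool_stages(plans))  # 6: more than the 5 this UNet++ supports
```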
It's great to hear that it works; as long as you end up with fewer pooling layers, it will work. The new planner class should be placed in the nnunet/experiment_planning folder. Did you install nnunet using pip install with the -e option (editable/development mode)? Only with an editable install does the package remain in place instead of being copied to the site-packages folder, so that your changes can be found by the interpreter.
Yes, you are right, I probably haven't installed it with the -e option; I didn't even know about it, haha :)
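As a sanity check for the editable-install point above, you can ask the interpreter where it resolves a package from; after pip install -e ., nnunet should resolve to a path inside your clone rather than site-packages. A generic sketch (the helper name is mine, and "nnunet" is assumed to be the installed package name):

```python
import importlib.util


def module_origin(name):
    """Return the file a module would be imported from, or None if absent."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec is not None else None


# Try module_origin("nnunet") in your environment: with an editable install
# the path points into the cloned repo, not into site-packages.
print(module_origin("pickle"))  # stdlib example; prints a real file path
```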
> new planner class should be placed in nnunet/experiment_planning folder

Could you please tell me how to derive a custom planner from ExperimentPlanner3D_v21? I need help! I would appreciate it if you could tell me.