brain-segmentation-pytorch
in_channels parameter change causes size mismatch
When changing the kwarg in_channels from 3 to 2, like
torch.hub.load('mateuszbuda/brain-segmentation-pytorch', 'unet',
in_channels=2, out_channels=1, init_features=32, pretrained=True)
this error occurs:
...module.py", line 1044, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for UNet:
size mismatch for encoder1.enc1conv1.weight: copying a param with shape torch.Size([32, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 2, 3, 3]).
Hi, this is expected: the model was trained on 3-channel input images. The trained weights can only be loaded into a model initialized with in_channels=3.
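One possible workaround (not part of this repo, just a sketch of the general technique): build a fresh first conv layer with 2 input channels and copy over the pretrained weight slices for the two channels you actually use. A plain nn.Conv2d stands in here for the model's enc1conv1 layer.

```python
import torch
import torch.nn as nn

# Hypothetical example: a conv pretrained on 3 input channels
# (stand-in for enc1conv1 with weight shape [32, 3, 3, 3]).
pretrained_conv = nn.Conv2d(3, 32, kernel_size=3, padding=1)

# New conv accepting 2 input channels instead of 3.
new_conv = nn.Conv2d(2, 32, kernel_size=3, padding=1)

with torch.no_grad():
    # Weight layout is [out_channels, in_channels, kH, kW]:
    # keep only the slices for the first two input channels.
    new_conv.weight.copy_(pretrained_conv.weight[:, :2])
    new_conv.bias.copy_(pretrained_conv.bias)

x = torch.randn(1, 2, 64, 64)   # a 2-channel input batch
out = new_conv(x)
print(out.shape)                # torch.Size([1, 32, 64, 64])
```

Which two channel slices to keep (or whether to average all three) depends on which MRI sequences your 2-channel data corresponds to, so results may differ from the fully pretrained model.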
Hi, how did you make these 3-channel images?
Image channels correspond to 3 MRI sequences: pre-contrast, FLAIR, and post-contrast. More detail in the paper: https://arxiv.org/abs/1906.03720
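A minimal sketch of how the three sequences can be combined, assuming each is a co-registered 2D slice of the same size (array shapes here are illustrative):

```python
import numpy as np

# Hypothetical slices from the three MRI sequences; in practice these
# come from the co-registered pre-contrast, FLAIR, and post-contrast
# volumes for the same patient and slice index.
pre = np.random.rand(256, 256).astype(np.float32)    # pre-contrast
flair = np.random.rand(256, 256).astype(np.float32)  # FLAIR
post = np.random.rand(256, 256).astype(np.float32)   # post-contrast

# Stack the sequences along the last axis to form one 3-channel image.
image = np.stack([pre, flair, post], axis=-1)
print(image.shape)  # (256, 256, 3)
```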
I see you have some MATLAB functions for preprocessing. Are there any written in Python?
All preprocessing steps are implemented in Python in this repo.