Labels4Free
Can I use this project for grayscale images?
What modifications should be made to generate grayscale images? When I try to load my checkpoint, I get:

```
File "C:\Users\wu.conda\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 1482, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Generator:
	size mismatch for to_rgb1.bias: copying a param with shape torch.Size([1, 1, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 1, 1]).
	size mismatch for to_rgb1.conv.weight: copying a param with shape torch.Size([1, 1, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 512, 1, 1]).
	size mismatch for to_rgbs.0.bias: copying a param with shape torch.Size([1, 1, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 1, 1]).
	size mismatch for to_rgbs.0.conv.weight: copying a param with shape torch.Size([1, 1, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 512, 1, 1]).
	size mismatch for to_rgbs.1.bias: copying a param with shape torch.Size([1, 1, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 1, 1]).
	size mismatch for to_rgbs.1.conv.weight: copying a param with shape torch.Size([1, 1, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 512, 1, 1]).
	size mismatch for to_rgbs.2.bias: copying a param with shape torch.Size([1, 1, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 1, 1]).
	size mismatch for to_rgbs.2.conv.weight: copying a param with shape torch.Size([1, 1, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 512, 1, 1]).
	size mismatch for to_rgbs.3.bias: copying a param with shape torch.Size([1, 1, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 1, 1]).
	size mismatch for to_rgbs.3.conv.weight: copying a param with shape torch.Size([1, 1, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 512, 1, 1]).
	size mismatch for to_rgbs.4.bias: copying a param with shape torch.Size([1, 1, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 1, 1]).
	size mismatch for to_rgbs.4.conv.weight: copying a param with shape torch.Size([1, 1, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 256, 1, 1]).
	size mismatch for to_rgbs.5.bias: copying a param with shape torch.Size([1, 1, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 1, 1]).
	size mismatch for to_rgbs.5.conv.weight: copying a param with shape torch.Size([1, 1, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 128, 1, 1]).
```
It looks like your checkpoint was trained with a single output channel (grayscale), while the Generator defined in the code builds 3-channel (RGB) `to_rgb` layers. You need to change the output channels of the `to_rgb` layers to 1 so the model architecture matches your checkpoint. After that you can retrain the alpha network on your model.