
Problem Solved

Open · jerryniu0624 opened this issue on Oct 13, 2023 · 2 comments

I train with only the KSC dataset. After making the ocbs pickle, this error arises:

Data is OK. ok
train labels: tensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8])
size of train datas: torch.Size([45, 103, 9, 9])
dict_keys(['data', 'Labels', 'set'])
(9, 9, 103, 1800) [0 0 0 ... 8 8 8]
(1800, 103, 9, 9)
target data augmentation label: [0 0 0 ... 8 8 8]
dict_keys([0, 1, 2, 3, 4, 5, 6, 7, 8])
(9, 9, 103, 1800) [0 0 0 ... 8 8 8]
{'Total': 614006, 'Trainable': 614006}
Training...
Traceback (most recent call last):
  File "/mnt/sde/niuyuanzhuo/TIP_2022_CMFSL-main/CMFSL_UP_main.py", line 480, in <module>
    support_features, _ = feature_encoder(supports.cuda())  # torch.Size([409, 32, 7, 3, 3])
  File "/home/niuyuanzhuo/Anaconda3/envs/CMFSL/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/mnt/sde/niuyuanzhuo/TIP_2022_CMFSL-main/CMFSL_UP_main.py", line 382, in forward
    feature = self.feature_encoder(x)  # (45, 64)
  File "/home/niuyuanzhuo/Anaconda3/envs/CMFSL/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/mnt/sde/niuyuanzhuo/TIP_2022_CMFSL-main/CMFSL_UP_main.py", line 337, in forward
    Z = F.relu(self.conv4(Z_TMP), inplace=True)
  File "/home/niuyuanzhuo/Anaconda3/envs/CMFSL/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/niuyuanzhuo/Anaconda3/envs/CMFSL/lib/python3.7/site-packages/torch/nn/modules/container.py", line 139, in forward
    input = module(input)
  File "/home/niuyuanzhuo/Anaconda3/envs/CMFSL/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/niuyuanzhuo/Anaconda3/envs/CMFSL/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 457, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/niuyuanzhuo/Anaconda3/envs/CMFSL/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 454, in _conv_forward
    self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [128, 528, 3, 3], expected input[9, 928, 4, 4] to have 528 channels, but got 928 channels instead

jerryniu0624 commented on Oct 13, 2023, 06:10
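For readers hitting the same trace: the failure is a plain channel mismatch. self.conv4 was declared with in_channels=528, but for this data the features reaching it have 928 channels, so F.conv2d rejects the input. A minimal standalone sketch (not the repository's code) that reproduces the same RuntimeError:

import torch
import torch.nn as nn

# A Conv2d declared for 528 input channels, like conv4 in the error above.
conv4 = nn.Conv2d(in_channels=528, out_channels=128, kernel_size=3, padding=1)

ok = torch.randn(9, 528, 4, 4)   # matches the declared in_channels, so this works
print(conv4(ok).shape)           # torch.Size([9, 128, 4, 4])

bad = torch.randn(9, 928, 4, 4)  # the channel count this pipeline actually produces
try:
    conv4(bad)
except RuntimeError as e:
    print(e)                     # "... expected input[9, 928, 4, 4] to have 528 channels ..."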

After I change self.conv4 = conv3x3(528, FEATURE_DIM) to self.conv4 = conv3x3(928, FEATURE_DIM), the training process runs fine. However, the testing process hits the same issue when testing on the PaviaU dataset.

jerryniu0624 commented on Oct 13, 2023, 06:10
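Hard-coding 928 only bakes in one dataset's channel count; any dataset whose features reach conv4 with a different width will trip the same check again. One way to avoid the magic number is to infer conv4's in_channels at construction time by pushing a dummy patch through the preceding layers. The sketch below is illustrative only; the backbone layers, patch_size, and feature_dim values are placeholders, not the actual layers or constants from CMFSL_UP_main.py:

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, n_bands, patch_size, feature_dim):
        super().__init__()
        # Stand-in for the layers that feed conv4 in the real network (hypothetical).
        self.backbone = nn.Sequential(
            nn.Conv2d(n_bands, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Push one dummy patch through the backbone to measure how many channels
        # actually reach conv4, instead of hard-coding 528 or 928.
        with torch.no_grad():
            c = self.backbone(torch.zeros(1, n_bands, patch_size, patch_size)).shape[1]
        self.conv4 = nn.Conv2d(c, feature_dim, kernel_size=3, padding=1)

    def forward(self, x):
        z = self.backbone(x)
        return torch.relu(self.conv4(z))

enc = Encoder(n_bands=103, patch_size=9, feature_dim=160)
print(enc(torch.randn(4, 103, 9, 9)).shape)  # torch.Size([4, 160, 9, 9])

On PyTorch 1.8+ the same effect is available out of the box via nn.LazyConv2d(FEATURE_DIM, kernel_size=3), which fills in in_channels automatically on the first forward pass.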

I think I know the reason: the number of spectral bands differs between the test dataset and the train dataset.

jerryniu0624 commented on Oct 13, 2023, 07:10
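That diagnosis fits the error: KSC and Pavia University are distributed with different numbers of spectral bands (commonly 176 and 103 after noisy-band removal), so any layer whose width depends on the band count will disagree between the source and target datasets. If the code path being used does not already align the spectra, one common workaround (an assumption here, not something taken from this repository) is a small per-dataset mapping layer that projects every dataset to a shared spectral dimension before the shared encoder:

import torch
import torch.nn as nn

COMMON_DIM = 100  # hypothetical shared spectral dimension

class BandMapper(nn.Module):
    """Per-dataset 1x1 conv that maps n_bands to a common spectral dimension."""
    def __init__(self, n_bands, common_dim=COMMON_DIM):
        super().__init__()
        self.proj = nn.Conv2d(n_bands, common_dim, kernel_size=1)

    def forward(self, x):      # x: (N, n_bands, H, W)
        return self.proj(x)    # (N, common_dim, H, W)

ksc_mapper = BandMapper(n_bands=176)     # KSC: 176 bands after band removal
paviau_mapper = BandMapper(n_bands=103)  # Pavia University: 103 bands

x_ksc = torch.randn(4, 176, 9, 9)
x_up = torch.randn(4, 103, 9, 9)
print(ksc_mapper(x_ksc).shape, paviau_mapper(x_up).shape)  # both (4, 100, 9, 9)

With a mapper like this in front, the shared feature encoder always sees the same number of input channels, and conv4's in_channels no longer needs to change per dataset.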