IndexError: index 1 is out of bounds for axis 1 with size 1

liubaoning111 opened this issue on Mar 29, 2024 · 10 comments

Why does this happen? Please help, thank you.

Traceback (most recent call last):
  File "train.py", line 187, in <module>
    main(config)
  File "train.py", line 142, in main
    loss = val_one_epoch(
  File "/home/liubn/0-149liubaoning/30-H-vmunet-main/H-vmunet-main/engine.py", line 85, in val_one_epoch
    TN, FP, FN, TP = confusion[0,0], confusion[0,1], confusion[1,0], confusion[1,1]
IndexError: index 1 is out of bounds for axis 1 with size 1

liubaoning111 · Mar 29 '24 09:03

Hi, I hope I can help you. Based on your error message, I would like to ask whether you have changed 'num_classes', or whether you are using a segmentation task whose data is multi-class. Either would cause the number of output channels to be more than 1.
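For reference, a confusion matrix collapses to shape (1, 1) whenever the flattened predictions and labels contain only a single class between them, and indexing [0, 1] then fails exactly as in the traceback. A minimal sketch of the failure and a possible guard, assuming engine.py builds the matrix with scikit-learn's confusion_matrix (an assumption; the repo may compute it differently):

import numpy as np
from sklearn.metrics import confusion_matrix

y_pred = np.zeros(16, dtype=np.uint8)   # every pixel predicted as background
y_true = np.zeros(16, dtype=np.uint8)   # every pixel labeled as background

confusion = confusion_matrix(y_true, y_pred)
print(confusion.shape)   # (1, 1): indexing confusion[0, 1] raises IndexError

# Passing labels=[0, 1] keeps the matrix 2x2 even when a class is absent:
confusion = confusion_matrix(y_true, y_pred, labels=[0, 1])
print(confusion.shape)   # (2, 2)

As the rest of the thread shows, though, the root cause here was incorrectly prepared masks (all background), so fixing the data preparation is the real solution.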

wurenkai · Mar 29 '24 11:03

If you want to do a multi-class segmentation task, you can refer to the framework of the earlier, excellent VM-UNet work: replace VM-UNet with H-vmunet and change the output of H-vmunet from 'return torch.sigmoid(out0)' to 'return out0'. It will then work fine with train_synapse.py.
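Concretely, the change is to return raw logits instead of sigmoid probabilities, since the Synapse-style training applies softmax inside its losses. A toy sketch of the idea (the class and layer names here are stand-ins, not the actual H-vmunet code):

import torch
import torch.nn as nn

class SegHead(nn.Module):
    """Stand-in for the final layer of a segmentation model."""
    def __init__(self, in_ch, num_classes):
        super().__init__()
        self.final = nn.Conv2d(in_ch, num_classes, kernel_size=1)

    def forward(self, x):
        out0 = self.final(x)   # (B, num_classes, H, W)
        # Binary setting (original H-vmunet): return torch.sigmoid(out0)
        # Multi-class setting: return raw logits; CrossEntropy/Dice losses in
        # train_synapse.py-style pipelines apply softmax themselves.
        return out0

head = SegHead(in_ch=16, num_classes=9)
print(head(torch.randn(1, 16, 64, 64)).shape)   # torch.Size([1, 9, 64, 64])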

wurenkai · Mar 29 '24 11:03

Thank you very much for your reply; it will help me with multi-class segmentation tasks. However, the dataset I am using this time is ISIC-2017, which does not have multiple labels.

liubaoning111 · Mar 29 '24 11:03

I have just updated 'Prepare_ISIC2017' to add some notes. Try re-preparing the ISIC2017 dataset while keeping 'num_classes=1' in 'config_setting.py' during training.

wurenkai · Mar 29 '24 11:03

Hello, I have the same problem. At epoch 30 of training, this problem arose. After debugging, I found that the shape of the confusion matrix was [1, 1] rather than 2x2, so the index was out of range. I hope you can help me solve this problem when you see it.

ai-JIE · Apr 06 '24 14:04

@ai-JIE Hi, epoch 30 is exactly when the first model validation phase begins. I think it is still likely that the validation set was not correctly written to the '.npy' files during data preparation. Have a look at the detailed data-preparation format given in Ultralight VM-UNet and its issues; the data preparation is the same for both projects.
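A quick way to confirm this is to inspect the prepared validation arrays directly. A minimal sketch, assuming the preparation saves the masks into a single '.npy' file (the path below is a placeholder for wherever your script writes it):

import numpy as np

masks = np.load('./data/val_masks.npy')   # placeholder path

print(masks.shape)        # expect something like (N, H, W) or (N, 1, H, W)
print(np.unique(masks))   # expect exactly two values, e.g. [0 1] or [0 255]

# If np.unique reports a single value, the validation masks are empty (or all
# foreground), and the confusion matrix collapses to 1x1 at epoch 30.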

wurenkai · Apr 06 '24 14:04

@ai-JIE In particular, you can preprocess correctly with the 'Prepare your own dataset' steps below (a sketch of the corresponding script follows the list).

D. Prepare your own dataset

  1. The file format reference is as follows. (The image is a 24-bit png image. The mask is an 8-bit png image: 0 pixels for background, 255 pixels for target.)
  • './your_dataset/'
    • images
      • 0000.png
      • 0001.png
    • masks
      • 0000.png
      • 0001.png
    • Prepare_your_dataset.py
  2. In the 'Prepare_your_dataset.py' file, change the numbers of training, validation and test samples you want.
  3. Run 'Prepare_your_dataset.py'.
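For orientation, here is a minimal sketch of what such a preparation script typically does; the split counts, the resize size and the output file names are assumptions, not the repo's actual code:

import os
import numpy as np
from PIL import Image

root = './your_dataset'
train_num, val_num, test_num = 70, 15, 15   # adjust to your dataset
size = (256, 256)                           # resize target (assumption)

names = sorted(os.listdir(os.path.join(root, 'images')))
assert len(names) >= train_num + val_num + test_num

def load_split(split_names):
    imgs, msks = [], []
    for n in split_names:
        img = Image.open(os.path.join(root, 'images', n)).convert('RGB').resize(size)
        # Nearest-neighbour keeps the mask strictly binary after resizing.
        msk = Image.open(os.path.join(root, 'masks', n)).convert('L').resize(size, Image.NEAREST)
        imgs.append(np.array(img))
        msks.append(np.array(msk) // 255)   # map {0, 255} -> {0, 1}
    return np.stack(imgs), np.stack(msks)

splits = {
    'train': names[:train_num],
    'val':   names[train_num:train_num + val_num],
    'test':  names[train_num + val_num:train_num + val_num + test_num],
}
for split, split_names in splits.items():
    x, y = load_split(split_names)
    np.save(os.path.join(root, f'data_{split}.npy'), x)   # output names assumed
    np.save(os.path.join(root, f'mask_{split}.npy'), y)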

wurenkai · Apr 06 '24 15:04

Thank you for your answer. I have solved the problem; it really was the data preparation phase that was at fault.

ai-JIE · Apr 06 '24 15:04

If I want to train with my own dataset, is it better to use txt or xml for the annotation format?

xhlho · Apr 07 '24 03:04

@xhlho Hi, when preparing your own dataset we recommend generating annotations in the '.json' format (e.g., with labelme), then converting them to png masks with the following code.

import os
import os.path as osp
import json
import base64
import numpy as np
from PIL import Image
from labelme import utils

json_file = ""  # Path of the folder where the json files are stored.
list_path = os.listdir(json_file)

# Folder for the png masks, created next to the json folder.
mask_dir = osp.join(osp.dirname(json_file), "png_masks")
if not osp.exists(mask_dir):
    os.mkdir(mask_dir)

for name in list_path:
    path = os.path.join(json_file, name)
    if os.path.isfile(path) and path.endswith('.json'):
        with open(path) as f:
            data = json.load(f)

        # labelme embeds the image as base64; if it is absent, load it from
        # 'imagePath', which labelme stores relative to the json file.
        imageData = data.get('imageData')
        if not imageData:
            imagePath = os.path.join(osp.dirname(path), data['imagePath'])
            with open(imagePath, 'rb') as f:
                imageData = base64.b64encode(f.read()).decode('utf-8')

        img = utils.img_b64_to_arr(imageData)

        # Keep only valid polygon annotations (at least 3 vertices).
        shapes = []
        for shape in data['shapes']:
            if shape['shape_type'] == 'polygon' and len(shape['points']) >= 3:
                shapes.append(shape)

        # Map each label name to an integer value, background being 0.
        label_name_to_value = {'_background_': 0}
        for shape in sorted(shapes, key=lambda x: x['label']):
            label_name = shape['label']
            if label_name not in label_name_to_value:
                label_name_to_value[label_name] = len(label_name_to_value)

        # Rasterize the polygons into a label map. Note: recent labelme
        # versions return (cls, ins) from shapes_to_label; older versions
        # return only the label map, in which case drop the ', _'.
        lbl, _ = utils.shapes_to_label(img.shape, shapes, label_name_to_value)

        # Binary mask in the format the repo expects: 0 background, 255 target.
        mask = np.where(lbl > 0, 255, 0).astype(np.uint8)
        out_img = Image.fromarray(mask)

        file_name = osp.splitext(osp.basename(path))[0]
        out_img.save(osp.join(mask_dir, file_name + '.png'))

# Optional cleanup: remove stray jpg images left in the json folder.
for name in list_path:
    path = os.path.join(json_file, name)
    if os.path.isfile(path) and path.endswith('.jpg'):
        os.remove(path)
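To the format question above: txt and xml annotations usually store boxes or raw point lists that you would still have to rasterize yourself, whereas labelme's json polygons convert directly to the per-pixel png masks the segmentation pipeline needs. After conversion you can sanity-check one output (the file name is a placeholder):

from PIL import Image
import numpy as np

m = np.array(Image.open('png_masks/0000.png'))   # placeholder file name
print(m.dtype, m.shape, np.unique(m))            # expect uint8 and values [0 255]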

wurenkai · Apr 07 '24 04:04