
Classification code with modifications to train.py and datasets/brains18.py

Open loopnownow opened this issue 2 years ago • 17 comments

I'd like to share classification code based on modifications to train.py and datasets/brains18.py. The code builds on JasperHG90's comment at #58. To adapt the repository for classification, three main problems have to be solved: first, the format of the class labels; second, how to feed the class labels into the code; third, how to define an appropriate loss.

For the first one, I simply append the class label to each line of the training list under data/train, after the path of the mask, for example:

    MRBrainS18/images/75.nii.gz MRBrainS18/labels/75.nii.gz 0
    MRBrainS18/images/14.nii.gz MRBrainS18/labels/14.nii.gz 0
    MRBrainS18/images/148.nii.gz MRBrainS18/labels/148.nii.gz 0
    MRBrainS18/images/4.nii.gz MRBrainS18/labels/4.nii.gz 0
    MRBrainS18/images/5.nii.gz MRBrainS18/labels/5.nii.gz 0
    MRBrainS18/images/7.nii.gz MRBrainS18/labels/7.nii.gz 1
    MRBrainS18/images/71.nii.gz MRBrainS18/labels/71.nii.gz 1
    MRBrainS18/images/72.nii.gz MRBrainS18/labels/72.nii.gz 1
    MRBrainS18/images/73.nii.gz MRBrainS18/labels/73.nii.gz 1
    MRBrainS18/images/74.nii.gz MRBrainS18/labels/74.nii.gz 1
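Each entry is "<image path> <mask path> <class label>" separated by single spaces, which is what the dataset code later splits on. A tiny self-contained sketch (the example line is copied from the list above):

    # Split one list entry into its three fields.
    line = "MRBrainS18/images/75.nii.gz MRBrainS18/labels/75.nii.gz 0"
    img_path, mask_path, class_id = line.strip().split(" ")
    print(img_path, mask_path, int(class_id))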

For the second one, I modified datasets/brains18.py. The modified lines are marked with #####:

    import torch  #####

line 43:

    label_name = os.path.join(self.root_dir, ith_info[1])
    class_array = np.zeros((1))  #####
    class_array[0] = ith_info[2]  #####
    class_array = torch.tensor(class_array, dtype=torch.float32)  #####

line 59:

    return img_array, mask_array, class_array  #####

For the third one, I modified train.py. The modified lines are again marked with ##### (where a line replaces an original one, the original follows the marker):

    from models import resnet  #####

The class below is from JasperHG90's comment at #58, with my modifications marked with ##### (note that I used resnet50 rather than resnet34):

class MedicalNet(nn.Module):

  def __init__(self, path_to_weights, device):
    super(MedicalNet, self).__init__()
    self.model = resnet.resnet50(sample_input_D=14, sample_input_H=112, sample_input_W=112, num_seg_classes=2)
    self.model.conv_seg = nn.Sequential(
        nn.AdaptiveMaxPool3d(output_size=(1, 1, 1)),
        nn.Flatten(start_dim=1),
        nn.Dropout(0.1)
    )
    net_dict = self.model.state_dict()
    pretrained_weights = torch.load(path_to_weights, map_location=torch.device(device))
    pretrain_dict = {
        k.replace("module.", ""): v for k, v in pretrained_weights['state_dict'].items() if k.replace("module.", "") in net_dict.keys()
      }
    net_dict.update(pretrain_dict)
    self.model.load_state_dict(net_dict)
    self.fc = nn.Linear(2048, 1)

  def forward(self, x):
    features = self.model(x)
    return torch.sigmoid_(self.fc(features)) #####

line 24: loss_seg = nn.BCELoss()  ##### replaces loss_seg = nn.CrossEntropyLoss(ignore_index=-1)
line 43: volumes, label_masks, class_array = batch_data  ##### replaces volumes, label_masks = batch_data
line 47: class_array = class_array.cuda()  #####
line 66: loss_value_seg = loss_seg(out_masks, class_array)  ##### replaces loss_value_seg = loss_seg(out_masks, new_label_masks)
line 67: loss = loss_value_seg
line 68: loss.requires_grad_(True)  #####

line 118: model = MedicalNet(path_to_weights="pretrain/resnet_50.pth", device='cuda')  ##### replaces model, parameters = generate_model(sets)
line 119: model = MedicalNet(path_to_weights="pretrain/resnet_50.pth", device='cuda')  ##### replaces print (model)
line 120: model.cuda()  ##### replaces # optimizer

for param_name, param in model.named_parameters():
    if param_name.startswith("conv_seg"):
        param.requires_grad = True
    else:
        param.requires_grad = False

line 128: optimizer = torch.optim.SGD(model.parameters(), lr=1e-5, momentum=0.9, weight_decay=1e-3)  ##### replaces optimizer = torch.optim.SGD(params, momentum=0.9, weight_decay=1e-3)
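For reference, a small self-contained sketch of the shapes this first version works with (the dummy tensors are made up): the wrapper ends in nn.Linear(2048, 1) plus a sigmoid, so nn.BCELoss needs a float target of the same [batch, 1] shape, and the SGD step only updates the parameters left with requires_grad=True.

    import torch
    import torch.nn as nn

    criterion = nn.BCELoss()
    probs = torch.sigmoid(torch.randn(4, 1))          # stand-in for the wrapper's output: [batch, 1]
    targets = torch.tensor([[0.], [1.], [1.], [0.]])  # float labels with the same shape
    print(criterion(probs, targets).item())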

loopnownow avatar Dec 03 '22 07:12 loopnownow

Thank you for your reply and for trying the code. Currently, the name "out_masks" holds the classification result, whereas previously it held the segmentation result. The error arises from the conflict between these two uses. Commenting out the following code in train.py may solve the problem:

    # # resize label
    # [n, _, d, h, w] = out_masks.shape
    # new_label_masks = np.zeros([n, d, h, w])
    # for label_id in range(n):
    #     label_mask = label_masks[label_id]
    #     [ori_c, ori_d, ori_h, ori_w] = label_mask.shape
    #     label_mask = np.reshape(label_mask, [ori_d, ori_h, ori_w])
    #     scale = [d*1.0/ori_d, h*1.0/ori_h, w*1.0/ori_w]
    #     label_mask = ndimage.interpolation.zoom(label_mask, scale, order=0)
    #     new_label_masks[label_id] = label_mask
    #
    # new_label_masks = torch.tensor(new_label_masks).to(torch.int64)
    # if not sets.no_cuda:
    #     new_label_masks = new_label_masks.cuda()

loopnownow avatar Dec 20 '22 09:12 loopnownow

I updated all the code; four files were modified: 1. datasets/brains18.py, 2. models/resnet.py, 3. model.py, and 4. train.py. The code can be found below.

loopnownow avatar Dec 20 '22 10:12 loopnownow

'''
Dataset for training
Written by Whalechen
'''

import math
import os
import random
import torch
import numpy as np
from torch.utils.data import Dataset
import nibabel
from scipy import ndimage

class BrainS18Dataset(Dataset):

def __init__(self, root_dir, img_list, sets):
    with open(img_list, 'r') as f:
        self.img_list = [line.strip() for line in f]
    print("Processing {} datas".format(len(self.img_list)))
    self.root_dir = root_dir
    self.input_D = sets.input_D
    self.input_H = sets.input_H
    self.input_W = sets.input_W
    self.phase = sets.phase

def __nii2tensorarray__(self, data):
    [z, y, x] = data.shape
    new_data = np.reshape(data, [1, z, y, x])
    new_data = new_data.astype("float32")
        
    return new_data

def __len__(self):
    return len(self.img_list)

def __getitem__(self, idx):
    
    if self.phase == "train":
        # read image and labels
        ith_info = self.img_list[idx].split(" ")
        img_name = os.path.join(self.root_dir, ith_info[0])
        label_name = os.path.join(self.root_dir, ith_info[1])
        class_array = int(ith_info[2])
        class_array = torch.tensor(class_array, dtype=torch.long)
        assert os.path.isfile(img_name)
        assert os.path.isfile(label_name)
        img = nibabel.load(img_name)  # We have transposed the data from WHD format to DHW
        assert img is not None
        mask = nibabel.load(label_name)
        assert mask is not None
        
        # data processing
        img_array, mask_array = self.__training_data_process__(img, mask)

        # 2 tensor array
        img_array = self.__nii2tensorarray__(img_array)
        mask_array = self.__nii2tensorarray__(mask_array)

        assert img_array.shape ==  mask_array.shape, "img shape:{} is not equal to mask shape:{}".format(img_array.shape, mask_array.shape)
        return img_array, mask_array, class_array
    
    elif self.phase == "test":
        # read image
        ith_info = self.img_list[idx].split(" ")
        img_name = os.path.join(self.root_dir, ith_info[0])
        print(img_name)
        assert os.path.isfile(img_name)
        img = nibabel.load(img_name)
        assert img is not None

        # data processing
        img_array = self.__testing_data_process__(img)

        # 2 tensor array
        img_array = self.__nii2tensorarray__(img_array)

        return img_array
        

def __drop_invalid_range__(self, volume, label=None):
    """
    Cut off the invalid area
    """
    zero_value = volume[0, 0, 0]
    non_zeros_idx = np.where(volume != zero_value)
    
    [max_z, max_h, max_w] = np.max(np.array(non_zeros_idx), axis=1)
    [min_z, min_h, min_w] = np.min(np.array(non_zeros_idx), axis=1)
    
    if label is not None:
        return volume[min_z:max_z, min_h:max_h, min_w:max_w], label[min_z:max_z, min_h:max_h, min_w:max_w]
    else:
        return volume[min_z:max_z, min_h:max_h, min_w:max_w]


def __random_center_crop__(self, data, label):
    from random import random
    """
    Random crop
    """
    target_indexs = np.where(label>0)
    [img_d, img_h, img_w] = data.shape
    [max_D, max_H, max_W] = np.max(np.array(target_indexs), axis=1)
    [min_D, min_H, min_W] = np.min(np.array(target_indexs), axis=1)
    [target_depth, target_height, target_width] = np.array([max_D, max_H, max_W]) - np.array([min_D, min_H, min_W])
    Z_min = int((min_D - target_depth*1.0/2) * random())
    Y_min = int((min_H - target_height*1.0/2) * random())
    X_min = int((min_W - target_width*1.0/2) * random())
    
    Z_max = int(img_d - ((img_d - (max_D + target_depth*1.0/2)) * random()))
    Y_max = int(img_h - ((img_h - (max_H + target_height*1.0/2)) * random()))
    X_max = int(img_w - ((img_w - (max_W + target_width*1.0/2)) * random()))
   
    Z_min = np.max([0, Z_min])
    Y_min = np.max([0, Y_min])
    X_min = np.max([0, X_min])

    Z_max = np.min([img_d, Z_max])
    Y_max = np.min([img_h, Y_max])
    X_max = np.min([img_w, X_max])

    Z_min = int(Z_min)
    Y_min = int(Y_min)
    X_min = int(X_min)
    
    Z_max = int(Z_max)
    Y_max = int(Y_max)
    X_max = int(X_max)

    return data[Z_min: Z_max, Y_min: Y_max, X_min: X_max], label[Z_min: Z_max, Y_min: Y_max, X_min: X_max]



def __itensity_normalize_one_volume__(self, volume):
    """
    normalize the intensity of an nd volume based on the mean and std of the nonzero region
    inputs:
        volume: the input nd volume
    outputs:
        out: the normalized nd volume
    """
    
    pixels = volume[volume > 0]
    mean = pixels.mean()
    std  = pixels.std()
    out = (volume - mean)/std
    out_random = np.random.normal(0, 1, size = volume.shape)
    out[volume == 0] = out_random[volume == 0]
    return out

def __resize_data__(self, data):
    """
    Resize the data to the input size
    """ 
    [depth, height, width] = data.shape
    scale = [self.input_D*1.0/depth, self.input_H*1.0/height, self.input_W*1.0/width]  
    data = ndimage.interpolation.zoom(data, scale, order=0)

    return data


def __crop_data__(self, data, label):
    """
    Random crop with different methods:
    """ 
    # random center crop
    data, label = self.__random_center_crop__ (data, label)
    
    return data, label

def __training_data_process__(self, data, label): 
    # crop data according net input size
    data = data.get_data()
    label = label.get_data()
    
    # drop out the invalid range
    data, label = self.__drop_invalid_range__(data, label)
    
    # crop data
    data, label = self.__crop_data__(data, label) 

    # resize data
    data = self.__resize_data__(data)
    label = self.__resize_data__(label)

    # normalization datas
    data = self.__itensity_normalize_one_volume__(data)

    return data, label


def __testing_data_process__(self, data): 
    # crop data according net input size
    data = data.get_data()

    # resize data
    data = self.__resize_data__(data)

    # normalization datas
    data = self.__itensity_normalize_one_volume__(data)

    return data
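For reference, a minimal sketch of wiring the modified dataset into a DataLoader (the SimpleNamespace fields, file paths, and batch size are placeholders, not values taken from the repository):

    from types import SimpleNamespace
    from torch.utils.data import DataLoader

    sets = SimpleNamespace(input_D=56, input_H=448, input_W=448, phase="train")
    dataset = BrainS18Dataset(root_dir="data", img_list="data/train_cls.txt", sets=sets)
    loader = DataLoader(dataset, batch_size=2, shuffle=True)

    for volumes, masks, class_ids in loader:
        # volumes/masks: [B, 1, D, H, W] float32; class_ids: [B] int64, ready for CrossEntropyLoss
        print(volumes.shape, masks.shape, class_ids.shape, class_ids.dtype)
        break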

loopnownow avatar Dec 20 '22 10:12 loopnownow

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import math
from functools import partial

__all__ = ['ResNet', 'resnet10', 'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152', 'resnet200']

def conv3x3x3(in_planes, out_planes, stride=1, dilation=1):
    # 3x3x3 convolution with padding
    return nn.Conv3d(
        in_planes,
        out_planes,
        kernel_size=3,
        dilation=dilation,
        stride=stride,
        padding=dilation,
        bias=False)

def downsample_basic_block(x, planes, stride, no_cuda=False):
    out = F.avg_pool3d(x, kernel_size=1, stride=stride)
    zero_pads = torch.Tensor(
        out.size(0), planes - out.size(1), out.size(2), out.size(3),
        out.size(4)).zero_()
    if not no_cuda:
        if isinstance(out.data, torch.cuda.FloatTensor):
            zero_pads = zero_pads.cuda()

    out = Variable(torch.cat([out.data, zero_pads], dim=1))

    return out

class BasicBlock(nn.Module):
expansion = 1

def __init__(self, inplanes, planes, stride=1, dilation=1, downsample=None):
    super(BasicBlock, self).__init__()
    self.conv1 = conv3x3x3(inplanes, planes, stride=stride, dilation=dilation)
    self.bn1 = nn.BatchNorm3d(planes)
    self.relu = nn.ReLU(inplace=True)
    self.conv2 = conv3x3x3(planes, planes, dilation=dilation)
    self.bn2 = nn.BatchNorm3d(planes)
    self.downsample = downsample
    self.stride = stride
    self.dilation = dilation

def forward(self, x):
    residual = x

    out = self.conv1(x)
    out = self.bn1(out)
    out = self.relu(out)
    out = self.conv2(out)
    out = self.bn2(out)

    if self.downsample is not None:
        residual = self.downsample(x)

    out += residual
    out = self.relu(out)

    return out

class Bottleneck(nn.Module):
expansion = 4

def __init__(self, inplanes, planes, stride=1, dilation=1, downsample=None):
    super(Bottleneck, self).__init__()
    self.conv1 = nn.Conv3d(inplanes, planes, kernel_size=1, bias=False)
    self.bn1 = nn.BatchNorm3d(planes)
    self.conv2 = nn.Conv3d(
        planes, planes, kernel_size=3, stride=stride, dilation=dilation, padding=dilation, bias=False)
    self.bn2 = nn.BatchNorm3d(planes)
    self.conv3 = nn.Conv3d(planes, planes * 4, kernel_size=1, bias=False)
    self.bn3 = nn.BatchNorm3d(planes * 4)
    self.relu = nn.ReLU(inplace=True)
    self.downsample = downsample
    self.stride = stride
    self.dilation = dilation

def forward(self, x):
    residual = x

    out = self.conv1(x)
    out = self.bn1(out)
    out = self.relu(out)

    out = self.conv2(out)
    out = self.bn2(out)
    out = self.relu(out)

    out = self.conv3(out)
    out = self.bn3(out)

    if self.downsample is not None:
        residual = self.downsample(x)

    out += residual
    out = self.relu(out)

    return out

class ResNet(nn.Module):

def __init__(self,
             block,
             layers,
             sample_input_D,
             sample_input_H,
             sample_input_W,
             num_seg_classes,
             shortcut_type='B',
             no_cuda = False):
    self.inplanes = 64
    self.no_cuda = no_cuda
    super(ResNet, self).__init__()
    self.conv1 = nn.Conv3d(
        1,
        64,
        kernel_size=7,
        stride=(2, 2, 2),
        padding=(3, 3, 3),
        bias=False)
        
    self.bn1 = nn.BatchNorm3d(64)
    self.relu = nn.ReLU(inplace=True)
    self.maxpool = nn.MaxPool3d(kernel_size=(3, 3, 3), stride=2, padding=1)
    self.layer1 = self._make_layer(block, 64, layers[0], shortcut_type)
    self.layer2 = self._make_layer(
        block, 128, layers[1], shortcut_type, stride=2)
    self.layer3 = self._make_layer(
        block, 256, layers[2], shortcut_type, stride=1, dilation=2)
    self.layer4 = self._make_layer(
        block, 512, layers[3], shortcut_type, stride=1, dilation=4)

    # self.conv_seg = nn.Sequential(
    #                                 nn.ConvTranspose3d(
    #                                 512 * block.expansion,
    #                                 32,
    #                                 2,
    #                                 stride=2
    #                                 ),
    #                                 nn.BatchNorm3d(32),
    #                                 nn.ReLU(inplace=True),
    #                                 nn.Conv3d(
    #                                 32,
    #                                 32,
    #                                 kernel_size=3,
    #                                 stride=(1, 1, 1),
    #                                 padding=(1, 1, 1),
    #                                 bias=False),
    #                                 nn.BatchNorm3d(32),
    #                                 nn.ReLU(inplace=True),
    #                                 nn.Conv3d(
    #                                 32,
    #                                 num_seg_classes,
    #                                 kernel_size=1,
    #                                 stride=(1, 1, 1),
    #                                 bias=False)
    #                                 )
    self.conv_seg = nn.Sequential(
                            nn.AdaptiveMaxPool3d(output_size=(1, 1, 1)),
                            nn.Flatten(start_dim=1),
                            nn.Dropout(0.1),
                            nn.Linear(512 * block.expansion, num_seg_classes)
                            )

    for m in self.modules():
        if isinstance(m, nn.Conv3d):
            m.weight = nn.init.kaiming_normal(m.weight, mode='fan_out')
        elif isinstance(m, nn.BatchNorm3d):
            m.weight.data.fill_(1)
            m.bias.data.zero_()

def _make_layer(self, block, planes, blocks, shortcut_type, stride=1, dilation=1):
    downsample = None
    if stride != 1 or self.inplanes != planes * block.expansion:
        if shortcut_type == 'A':
            downsample = partial(
                downsample_basic_block,
                planes=planes * block.expansion,
                stride=stride,
                no_cuda=self.no_cuda)
        else:
            downsample = nn.Sequential(
                nn.Conv3d(
                    self.inplanes,
                    planes * block.expansion,
                    kernel_size=1,
                    stride=stride,
                    bias=False), nn.BatchNorm3d(planes * block.expansion))

    layers = []
    layers.append(block(self.inplanes, planes, stride=stride, dilation=dilation, downsample=downsample))
    self.inplanes = planes * block.expansion
    for i in range(1, blocks):
        layers.append(block(self.inplanes, planes, dilation=dilation))

    return nn.Sequential(*layers)

def forward(self, x):
    x = self.conv1(x)
    x = self.bn1(x)
    x = self.relu(x)
    x = self.maxpool(x)
    x = self.layer1(x)
    x = self.layer2(x)
    x = self.layer3(x)
    x = self.layer4(x)
    x = self.conv_seg(x)
    x = torch.sigmoid_(x)
    return x

def resnet10(**kwargs):
    """Constructs a ResNet-10 model."""
    model = ResNet(BasicBlock, [1, 1, 1, 1], **kwargs)
    return model


def resnet18(**kwargs):
    """Constructs a ResNet-18 model."""
    model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
    return model


def resnet34(**kwargs):
    """Constructs a ResNet-34 model."""
    model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)
    return model


def resnet50(**kwargs):
    """Constructs a ResNet-50 model."""
    model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
    return model


def resnet101(**kwargs):
    """Constructs a ResNet-101 model."""
    model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
    return model


def resnet152(**kwargs):
    """Constructs a ResNet-152 model."""
    model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs)
    return model


def resnet200(**kwargs):
    """Constructs a ResNet-200 model."""
    model = ResNet(Bottleneck, [3, 24, 36, 3], **kwargs)
    return model
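As a quick sanity check of the classification head above (a sketch; the input size and the two classes are arbitrary choices matching the wrapper earlier in the thread, and it assumes the definitions above are in scope):

    import torch

    # Build the modified backbone as a 2-class classifier and check the output shape.
    model = resnet50(sample_input_D=14, sample_input_H=112, sample_input_W=112,
                     num_seg_classes=2, shortcut_type='B', no_cuda=True)
    x = torch.randn(1, 1, 14, 112, 112)   # [batch, channel, D, H, W] dummy volume
    out = model(x)
    print(out.shape)                       # torch.Size([1, 2]): one sigmoid score per class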

loopnownow avatar Dec 20 '22 10:12 loopnownow

import torch
from torch import nn
from models import resnet

def generate_model(opt):
assert opt.model in ['resnet']

if opt.model == 'resnet':
    assert opt.model_depth in [10, 18, 34, 50, 101, 152, 200]
    
    if opt.model_depth == 10:
        model = resnet.resnet10(
            sample_input_W=opt.input_W,
            sample_input_H=opt.input_H,
            sample_input_D=opt.input_D,
            shortcut_type=opt.resnet_shortcut,
            no_cuda=opt.no_cuda,
            num_seg_classes=opt.n_seg_classes)
    elif opt.model_depth == 18:
        model = resnet.resnet18(
            sample_input_W=opt.input_W,
            sample_input_H=opt.input_H,
            sample_input_D=opt.input_D,
            shortcut_type=opt.resnet_shortcut,
            no_cuda=opt.no_cuda,
            num_seg_classes=opt.n_seg_classes)
    elif opt.model_depth == 34:
        model = resnet.resnet34(
            sample_input_W=opt.input_W,
            sample_input_H=opt.input_H,
            sample_input_D=opt.input_D,
            shortcut_type=opt.resnet_shortcut,
            no_cuda=opt.no_cuda,
            num_seg_classes=opt.n_seg_classes)
    elif opt.model_depth == 50:
        model = resnet.resnet50(
            sample_input_W=opt.input_W,
            sample_input_H=opt.input_H,
            sample_input_D=opt.input_D,
            shortcut_type=opt.resnet_shortcut,
            no_cuda=opt.no_cuda,
            num_seg_classes=opt.n_seg_classes)
    elif opt.model_depth == 101:
        model = resnet.resnet101(
            sample_input_W=opt.input_W,
            sample_input_H=opt.input_H,
            sample_input_D=opt.input_D,
            shortcut_type=opt.resnet_shortcut,
            no_cuda=opt.no_cuda,
            num_seg_classes=opt.n_seg_classes)
    elif opt.model_depth == 152:
        model = resnet.resnet152(
            sample_input_W=opt.input_W,
            sample_input_H=opt.input_H,
            sample_input_D=opt.input_D,
            shortcut_type=opt.resnet_shortcut,
            no_cuda=opt.no_cuda,
            num_seg_classes=opt.n_seg_classes)
    elif opt.model_depth == 200:
        model = resnet.resnet200(
            sample_input_W=opt.input_W,
            sample_input_H=opt.input_H,
            sample_input_D=opt.input_D,
            shortcut_type=opt.resnet_shortcut,
            no_cuda=opt.no_cuda,
            num_seg_classes=opt.n_seg_classes)

if not opt.no_cuda:
    if len(opt.gpu_id) > 1:
        model = model.cuda() 
        model = nn.DataParallel(model, device_ids=opt.gpu_id)
        net_dict = model.state_dict() 
    else:
        import os
        os.environ["CUDA_VISIBLE_DEVICES"]=str(opt.gpu_id[0])
        model = model.cuda() 
        model = nn.DataParallel(model, device_ids=None)
        net_dict = model.state_dict()
else:
    net_dict = model.state_dict()

# load pretrain
if opt.phase != 'test' and opt.pretrain_path:
    print ('loading pretrained model {}'.format(opt.pretrain_path))
    pretrain = torch.load(opt.pretrain_path)
    # pretrain_dict = {k: v for k, v in pretrain['state_dict'].items() if k in net_dict.keys()}
    pretrain_dict = {k.replace("module.", ""): v for k, v in pretrain['state_dict'].items() if k.replace("module.", "") in net_dict.keys()}
    net_dict.update(pretrain_dict)
    model.load_state_dict(net_dict)

    new_parameters = [] 
    for pname, p in model.named_parameters():
        for layer_name in opt.new_layer_names:
            if pname.find(layer_name) >= 0:
                new_parameters.append(p)
                break

    new_parameters_id = list(map(id, new_parameters))
    base_parameters = list(filter(lambda p: id(p) not in new_parameters_id, model.parameters()))
    parameters = {'base_parameters': base_parameters, 
                  'new_parameters': new_parameters}

    return model, parameters

return model, model.parameters()
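For illustration, a minimal sketch of the options object generate_model expects; every field below is an assumed stand-in for what setting.parse_opts() normally provides:

    from types import SimpleNamespace

    # All fields are illustrative placeholders, not the repository defaults.
    opt = SimpleNamespace(
        model='resnet', model_depth=50,
        input_W=112, input_H=112, input_D=14,
        resnet_shortcut='B', no_cuda=True, gpu_id=[0],
        n_seg_classes=2, phase='train', pretrain_path='',
        new_layer_names=['conv_seg'])

    # Without a pretrain path this returns (model, model.parameters()); with one it returns
    # the dict of 'base_parameters' and 'new_parameters' used by the optimizer in train.py.
    model, parameters = generate_model(opt)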

loopnownow avatar Dec 20 '22 10:12 loopnownow

'''
Training code for MRBrainS18 datasets segmentation
Written by Whalechen
'''

from setting import parse_opts
from datasets.brains18 import BrainS18Dataset
from model import generate_model
import torch
import numpy as np
from torch import nn
from torch import optim
from torch.optim import lr_scheduler
from torch.utils.data import DataLoader
import time
from utils.logger import log
from scipy import ndimage
import os

def train(data_loader, model, optimizer, scheduler, total_epochs, save_interval, save_folder, sets):
# settings
batches_per_epoch = len(data_loader)
log.info('{} epochs in total, {} batches per epoch'.format(total_epochs, batches_per_epoch))
loss_seg = nn.CrossEntropyLoss(ignore_index=-1)

print("Current setting is:")
print(sets)
print("\n\n")     
if not sets.no_cuda:
    loss_seg = loss_seg.cuda()
    
model.train()
train_time_sp = time.time()
for epoch in range(total_epochs):
    log.info('Start epoch {}'.format(epoch))
    
    scheduler.step()
    log.info('lr = {}'.format(scheduler.get_lr()))
    
    for batch_id, batch_data in enumerate(data_loader):
        # getting data batch
        batch_id_sp = epoch * batches_per_epoch
        volumes, label_masks, class_array = batch_data

        if not sets.no_cuda: 
            volumes = volumes.cuda()
            class_array = class_array.cuda()

        optimizer.zero_grad()
        out_masks = model(volumes)
        # # resize label
        # [n, _, d, h, w] = out_masks.shape
        # new_label_masks = np.zeros([n, d, h, w])
        # for label_id in range(n):
        #     label_mask = label_masks[label_id]
        #     [ori_c, ori_d, ori_h, ori_w] = label_mask.shape
        #     label_mask = np.reshape(label_mask, [ori_d, ori_h, ori_w])
        #     scale = [d*1.0/ori_d, h*1.0/ori_h, w*1.0/ori_w]
        #     label_mask = ndimage.interpolation.zoom(label_mask, scale, order=0)
        #     new_label_masks[label_id] = label_mask
        #
        # new_label_masks = torch.tensor(new_label_masks).to(torch.int64)
        # if not sets.no_cuda:
        #     new_label_masks = new_label_masks.cuda()

        # calculating loss
        loss_value_seg = loss_seg(out_masks, class_array)
        loss = loss_value_seg
        loss.backward()                
        optimizer.step()

        avg_batch_time = (time.time() - train_time_sp) / (1 + batch_id_sp)
        log.info(
                'Batch: {}-{} ({}), loss = {:.3f}, loss_seg = {:.3f}, avg_batch_time = {:.3f}'\
                .format(epoch, batch_id, batch_id_sp, loss.item(), loss_value_seg.item(), avg_batch_time))
      
        if not sets.ci_test:
            # save model
            if batch_id == 0 and batch_id_sp != 0 and batch_id_sp % save_interval == 0:
            #if batch_id_sp != 0 and batch_id_sp % save_interval == 0:
                model_save_path = '{}_epoch_{}_batch_{}.pth.tar'.format(save_folder, epoch, batch_id)
                model_save_dir = os.path.dirname(model_save_path)
                if not os.path.exists(model_save_dir):
                    os.makedirs(model_save_dir)
                
                log.info('Save checkpoints: epoch = {}, batch_id = {}'.format(epoch, batch_id)) 
                torch.save({
                            'epoch': epoch,
                            'batch_id': batch_id,
                            'state_dict': model.state_dict(),
                            'optimizer': optimizer.state_dict()},
                            model_save_path)
                        
print('Finished training')            
if sets.ci_test:
    exit()

if __name__ == '__main__':
# setting
sets = parse_opts()

if sets.ci_test:
    sets.img_list = './toy_data/test_ci.txt'
    sets.n_epochs = 1
    sets.no_cuda = True
    sets.data_root = './toy_data'
    sets.pretrain_path = ''
    sets.num_workers = 0
    sets.model_depth = 10
    sets.resnet_shortcut = 'A'
    sets.input_D = 14
    sets.input_H = 28
    sets.input_W = 28

# getting model
torch.manual_seed(sets.manual_seed)
model, parameters = generate_model(sets) 
print (model)
# optimizer
if sets.ci_test:
    params = [{'params': parameters, 'lr': sets.learning_rate}]
else:
    params = [
            { 'params': parameters['base_parameters'], 'lr': sets.learning_rate }, 
            { 'params': parameters['new_parameters'], 'lr': sets.learning_rate*100 }
            ]
optimizer = torch.optim.SGD(params, momentum=0.9, weight_decay=1e-3)   
scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)

# train from resume
if sets.resume_path:
    if os.path.isfile(sets.resume_path):
        print("=> loading checkpoint '{}'".format(sets.resume_path))
        checkpoint = torch.load(sets.resume_path)
        model.load_state_dict(checkpoint['state_dict'])
        optimizer.load_state_dict(checkpoint['optimizer'])
        print("=> loaded checkpoint '{}' (epoch {})"
          .format(sets.resume_path, checkpoint['epoch']))

# getting data
sets.phase = 'train'
if sets.no_cuda:
    sets.pin_memory = False
else:
    sets.pin_memory = True    
training_dataset = BrainS18Dataset(sets.data_root, sets.img_list, sets)
data_loader = DataLoader(training_dataset, batch_size=sets.batch_size, shuffle=True, num_workers=sets.num_workers, pin_memory=sets.pin_memory)

# training
train(data_loader, model, optimizer, scheduler, total_epochs=sets.n_epochs, save_interval=sets.save_intervals, save_folder=sets.save_folder, sets=sets) 
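The code above covers training only. A rough sketch of how classification inference could look with the same files (the checkpoint path is a placeholder, and taking the argmax of the sigmoid outputs as the predicted class is my assumption, not part of the original code):

    import torch
    from torch.utils.data import DataLoader
    from setting import parse_opts
    from datasets.brains18 import BrainS18Dataset
    from model import generate_model

    sets = parse_opts()
    sets.phase = 'test'
    model, _ = generate_model(sets)

    # Placeholder path; point this at a checkpoint saved by train().
    checkpoint = torch.load('path/to/checkpoint.pth.tar', map_location='cpu')
    model.load_state_dict(checkpoint['state_dict'])
    model.eval()

    testing_dataset = BrainS18Dataset(sets.data_root, sets.img_list, sets)
    loader = DataLoader(testing_dataset, batch_size=1, shuffle=False)
    with torch.no_grad():
        for volumes in loader:
            if not sets.no_cuda:
                volumes = volumes.cuda()
            scores = model(volumes)              # [1, num_classes] after the sigmoid head
            print(scores.argmax(dim=1).item())   # predicted class index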

loopnownow avatar Dec 20 '22 10:12 loopnownow

Thank you for responding. I applied the updated code and got this error:

File "train.py", line 69, in train loss_value_seg = loss_seg(out_masks, class_array) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/loss.py", line 619, in forward return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction) File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 3086, in binary_cross_entropy raise ValueError( ValueError: Using a target size (torch.Size([1, 1])) that is different to the input size (torch.Size([1, 2])) is deprecated. Please ensure they have the same size.

out_masks = tensor([[0.0005, 0.0046]], device='cuda:0', grad_fn=<SigmoidBackward0>)
class_array = tensor([[0.]], device='cuda:0')

hourieh77 avatar Dec 20 '22 11:12 hourieh77

This error arises when the input format of the loss and the labels do not match. You might check these lines:

In train.py:

    loss_seg = nn.CrossEntropyLoss(ignore_index=-1)

In brains18.py:

    class_array = int(ith_info[2])
    class_array = torch.tensor(class_array, dtype=torch.long)

In resnet.py:

    self.conv_seg = nn.Sequential(
        nn.AdaptiveMaxPool3d(output_size=(1, 1, 1)),
        nn.Flatten(start_dim=1),
        nn.Dropout(0.1),
        nn.Linear(512 * block.expansion, num_seg_classes)
    )
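For reference, a minimal self-contained sketch of the shape conventions involved, using the numbers printed above:

    import torch
    import torch.nn as nn

    # nn.CrossEntropyLoss expects [batch, num_classes] scores and a [batch] tensor of
    # int64 class indices; nn.BCELoss instead expects probabilities and float targets
    # of exactly the same shape, which is why mixing the two conventions raises the
    # size mismatch reported above.
    criterion = nn.CrossEntropyLoss(ignore_index=-1)
    out_masks = torch.tensor([[0.0005, 0.0046]])  # [1, 2], as printed above
    class_array = torch.tensor([0])               # [1], dtype int64
    print(criterion(out_masks, class_array).item())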

loopnownow avatar Dec 20 '22 11:12 loopnownow

The new layer's name should remain "conv_seg" in resnet.py, so there is no need to change setting.py.
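To illustrate why keeping the name matters, here is a small sketch of the substring match that model.py performs when collecting the new-layer parameters (the parameter names below are invented):

    # Parameters whose names contain an entry of new_layer_names get the larger
    # learning rate in train.py, so reusing "conv_seg" keeps setting.py untouched.
    names = ['conv_seg.3.weight', 'layer4.2.conv3.weight']
    new_layer_names = ['conv_seg']
    selected = [n for n in names if any(layer in n for layer in new_layer_names)]
    print(selected)  # ['conv_seg.3.weight']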

loopnownow avatar Dec 20 '22 11:12 loopnownow

Thank you very much. The error is fixed.

hourieh77 avatar Dec 20 '22 11:12 hourieh77

Thanks for your work, but what about testing?

MJachen avatar Mar 28 '23 02:03 MJachen

This error arises when the input format of the loss and the labels do not match. You might check these lines: in train.py, loss_seg = nn.CrossEntropyLoss(ignore_index=-1); in brains18.py, class_array = int(ith_info[2]) and class_array = torch.tensor(class_array, dtype=torch.long); in resnet.py, self.conv_seg = nn.Sequential(nn.AdaptiveMaxPool3d(output_size=(1, 1, 1)), nn.Flatten(start_dim=1), nn.Dropout(0.1), nn.Linear(512 * block.expansion, num_seg_classes)).

Thank you for your work. Which version of PyTorch are you using? Torch 0.4.1 reports the error AttributeError: module 'torch.nn' has no attribute 'Flatten'. @loopnownow

Shame-fight avatar Jun 27 '23 10:06 Shame-fight

Thank you very much. The error is fixed.

How about your results? My results are bad, which confuses me...

szh404 avatar Jan 18 '24 11:01 szh404

Hi, my results were too bad, so I did not use this model, but I reported its results in my thesis.


hourieh77 avatar Jan 22 '24 06:01 hourieh77

Thank you. The error has been fixed.

How about your results? My results are bad, which confuses me... Hello, can you share your related code? I'd like to learn from it. My email is [email protected]. Thank you so much.

hczyni avatar Mar 11 '24 11:03 hczyni

Thank you. The error has been fixed.

How about your results? My results are bad, which confuses me... Hello, can you share your related code? I'd like to learn from it. My email is [email protected]. Thank you so much.

Hi, I'm sorry, I reinstalled my computer's operating system and did not keep the previous data.

szh404 avatar Mar 12 '24 03:03 szh404

The results are not reproducible. Please provide some insights on the F1 score.

asmit203 avatar Jun 27 '24 01:06 asmit203