
Multi-instance learning with data from multiple files

ChristianEschen opened this issue · 4 comments

I am trying to build a multi-instance learning pipeline for multiple files (DICOM files). I am inspired by the multi-instance learning workflow for WSI data described in this notebook: https://github.com/Project-MONAI/tutorials/tree/main/pathology/multiple_instance_learning. However, I am having trouble finding an elegant way to build the transforms that load the patches originating from multiple files and combine them into a single tensor, similar to the transforms presented in the WSI example. The images stored in the files do not share the same shape.

The input files could be a list of dicts like this: [{"images": [path1, path2, path3, path4], "label": 0}, {"images": [path5, path6, path7, path8, path9], "label": 1}, ...]

And it should return a tensor with the following shape (for the first bag, with label 0): [batch_size, patches (4), CH, H, W]
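
To make this concrete, here is a small sketch of the structure I have in mind (the paths and the 256 x 256 patch size are just placeholders):

import torch

# One dict per bag; the paths are placeholders for real DICOM files.
data = [
    {"images": ["path1", "path2", "path3", "path4"], "label": 0},
    {"images": ["path5", "path6", "path7", "path8", "path9"], "label": 1},
]

# Desired collated output for the first bag:
# batch size 1, 4 patches, 1 channel, 256 x 256 pixels.
bag = torch.zeros(1, 4, 1, 256, 256)
print(bag.shape)  # torch.Size([1, 4, 1, 256, 256])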

I hope you can help me out. Best regards.

ChristianEschen · Jul 06 '22

Hi @drbeh ,

Could you please share some best practices for this question?

Thanks in advance.

Nic-Ma · Jul 07 '22

So I figured out that the main issue is that, when loading multiple DICOM files, the LoadImage transform tries to stack the images: it assumes they are different modalities with the same shape and the same affine matrix.

The files can be found here:
http://www.rubomedical.com/dicom_files/dicom_viewer_0002.zip
http://www.rubomedical.com/dicom_files/dicom_viewer_0003.zip

The minimum code to reproduce is:

import monai
from monai.transforms import LoadImage

print("MONAI version:", monai.__version__)


def main():
    path1 = "/home/sauroman/mia/data/angio/2/0002.DCM"
    path2 = "/home/sauroman/mia/data/angio/3/0003.DCM"

    # Each inner list is intended to be one bag of instances.
    images = [
        [path1, path2],
        [path1, path2],
    ]

    # This raises an error: LoadImage tries to stack both series into a
    # single array, but their spatial shapes and affine matrices differ.
    data, meta = LoadImage()([path1, path2])


if __name__ == "__main__":
    main()

ChristianEschen · Jul 07 '22

So the problem occurs when multiple DICOM images of different sizes are passed to the LoadImage transform. This is similar to rijobro's comment here: https://github.com/Project-MONAI/MONAI/issues/3588#issuecomment-1006703229
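
Loading each file on its own works, which confirms the diagnosis (a small check reusing the paths from the snippet above):

from monai.transforms import LoadImage

# Loading one file at a time avoids the stacking step entirely.
loader = LoadImage(image_only=True)
for path in ["/home/sauroman/mia/data/angio/2/0002.DCM",
             "/home/sauroman/mia/data/angio/3/0003.DCM"]:
    img = loader(path)
    print(path, img.shape)  # the two shapes differ, hence the failure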

ChristianEschen · Jul 07 '22

Hi @ChristianEschen,

This seems to be intended behavior: if the sizes of the images are different, there is no trivial way to concatenate them (apart from some arbitrary choice such as padding to the largest size). This means a Resize transform would be needed to make them equal in size before concatenating them.
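
For example, a rough sketch of that resize-then-stack idea (the 224 x 224 target size is an arbitrary choice, and this assumes each file loads as a single 2D frame):

import torch
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged, Resized

# Load each file separately (so LoadImage never tries to stack them)
# and resize every image to a common spatial size.
per_file = Compose([
    LoadImaged(keys="image"),
    EnsureChannelFirstd(keys="image"),
    Resized(keys="image", spatial_size=(224, 224)),
])

paths = ["/path/to/0002.DCM", "/path/to/0003.DCM"]  # placeholders
patches = [per_file({"image": p})["image"] for p in paths]
bag = torch.stack(patches)  # shape: (num_patches, C, 224, 224)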

However, for your use case I understand that you want a batch of patches [B x P x C x H x W]. One way I can think of is to write a Dataset that loads the images and creates a list of dictionaries for each set of images, e.g. [{"image": img1}, {"image": img2}], after which the rest should be similar to before. You should also use an appropriate collate function, similar to the one here: https://github.com/Project-MONAI/tutorials/blob/b429c883dccde3bfaa16c7c42fc2a35a8157fc3f/pathology/multiple_instance_learning/panda_mil_train_evaluate_pytorch_gpu.py#L304-L315
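
Something along these lines could work (a rough sketch; the class name, keys, target size, and pad-with-empty-patches collate are illustrative assumptions, not an existing MONAI API):

import torch
from torch.utils.data import Dataset, DataLoader
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged, Resized

class BagDataset(Dataset):
    """Loads each file of a bag separately and stacks them into one tensor."""

    def __init__(self, data, spatial_size=(224, 224)):
        self.data = data
        self.per_file = Compose([
            LoadImaged(keys="image"),
            EnsureChannelFirstd(keys="image"),
            Resized(keys="image", spatial_size=spatial_size),
        ])

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        item = self.data[index]
        patches = [self.per_file({"image": p})["image"] for p in item["images"]]
        return {"image": torch.stack(patches), "label": item["label"]}

def bag_collate(batch):
    # Bags can hold different numbers of patches, so pad each bag with
    # all-zero patches up to the largest bag before batching.
    max_patches = max(b["image"].shape[0] for b in batch)
    images, labels = [], []
    for b in batch:
        img = b["image"]
        pad = max_patches - img.shape[0]
        if pad > 0:
            img = torch.cat([img, img.new_zeros((pad, *img.shape[1:]))])
        images.append(img)
        labels.append(b["label"])
    # Resulting image shape: (B, P, C, H, W).
    return {"image": torch.stack(images), "label": torch.as_tensor(labels)}

# Placeholder bags; replace the paths with real DICOM files.
data = [
    {"images": ["path1.DCM", "path2.DCM"], "label": 0},
    {"images": ["path3.DCM", "path4.DCM", "path5.DCM"], "label": 1},
]
loader = DataLoader(BagDataset(data), batch_size=2, collate_fn=bag_collate)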

@rijobro, do you know a way to achieve the same thing without writing a new Dataset? In other words, Christian wants to stack a group of DICOM files with "different sizes" to create a single array, and then have a batch of them. Thanks!

drbeh · Jul 07 '22

Closing; this may be out of scope, and alternative implementations exist.

vikashg · Jan 05 '24