Bin Li
> In this way the magnification becomes x10, right? Is your embedder trained at this magnification? Since it is inside the folder called x20, I didn't expect it I think...
The code works with multi-class labels. The labels need to be presented as one-hot encoded binary vectors. For example, [0, 0, 1], [0, 1, 0], [1, 0, 0] each encodes...
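A minimal sketch of what that encoding looks like, assuming integer class ids as input (this is illustrative only, not the repo's own label loader; the function name and numpy usage are my own):

```python
import numpy as np

# Illustrative: turn integer class ids into one-hot binary label vectors,
# e.g. class 2 out of 3 classes -> [0, 0, 1].
def to_one_hot(class_ids, num_classes):
    labels = np.zeros((len(class_ids), num_classes), dtype=np.float32)
    labels[np.arange(len(class_ids)), class_ids] = 1.0
    return labels

print(to_one_hot([2, 1, 0], num_classes=3))
# [[0. 0. 1.]
#  [0. 1. 0.]
#  [1. 0. 0.]]
```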
> Thanks for your answer. I still have some questions. There are different types of patches in a slide, and we choose the highest-rank type as the slide-level label. How...
The performance of patch-based training (without considering MIL) depends mainly on your dataset. If your positive bags are highly unbalanced, i.e., there are a lot of negative patches in a...
Please check out the following terminal scrollback logs. The number of patches will be slightly different for different background thresholds, but the results should look similar. You should double check...
I tried using an ImageNet-pretrained ResNet18 on the TCGA dataset without normalizing the input image patches, and it worked decently well. Make sure the model uses BatchNorm (not InstanceNorm which is...
https://drive.google.com/drive/folders/1wHyaZkpgVGSoxPpaCeUCAFGhxcS9ZZEA?usp=sharing
https://drive.google.com/drive/folders/1v0ZEgSIYgriYuRn_O2p0t3RXdFHtiA61?usp=sharing
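A rough sketch of the ResNet18 embedder setup described above, assuming patches are only scaled to [0, 1] with no ImageNet mean/std normalization (the exact preprocessing in the repo's compute_feats.py may differ):

```python
import torch
import torchvision

# ImageNet-pretrained ResNet18 used as a feature extractor.
# torchvision's ResNet18 uses BatchNorm by default, as recommended above.
model = torchvision.models.resnet18(pretrained=True)
model.fc = torch.nn.Identity()   # keep the 512-d pooled features
model.eval()                     # BatchNorm runs with frozen running statistics

with torch.no_grad():
    patches = torch.rand(8, 3, 224, 224)  # dummy batch of patches in [0, 1]
    feats = model(patches)                # shape: [8, 512]
print(feats.shape)
```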
Regarding 'fuse' and 'cat', you can find them in the code: https://github.com/binli123/dsmil-wsi/blob/5216150fe7a1021c94ab1a2af6c942073bd37c4f/compute_feats.py#L110-L113 The multiscale features of Camelyon16 were computed using `model-v2` in this [link](https://drive.google.com/drive/folders/14pSKk2rnPJiJsGK2CQJXctP7fhRJZiyn?usp=sharing) (you can also find this in the readme)...
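For intuition only, here is a sketch of the two ways the scales can be combined; the linked lines in compute_feats.py hold the actual implementation, and the tensor names, the averaging rule for 'fuse', and the number of child patches below are my own assumptions:

```python
import torch

# Hypothetical names: one low-magnification parent patch feature and the
# features of its high-magnification child patches.
feats_low = torch.randn(512)        # parent patch embedding
feats_high = torch.randn(16, 512)   # child patch embeddings

# 'cat': append the parent feature to every child feature (512 -> 1024 dims).
feats_cat = torch.cat([feats_high, feats_low.expand(16, -1)], dim=1)

# 'fuse': merge the two scales without growing the dimension,
# e.g. by averaging (the repo may use a different rule).
feats_fuse = (feats_high + feats_low) / 2

print(feats_cat.shape, feats_fuse.shape)
# torch.Size([16, 1024]) torch.Size([16, 512])
```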
I think I explained in my previous answer that the multiscale features were computed using model-v2, and the 20x features downloaded by the script were computed using model-v0...