It will be detected as positive for both classes and possibly false-positive for one of the classes.
> Hi, thanks for your fabulous project.
>
> May we get some information from your experience on the training speed of each part of the pipeline? e.g., how long...
https://drive.google.com/drive/folders/1jd5qbpZ0fdqJdH3FYQaTimxYHkzNa_bW?usp=sharing
It is a ResNet18 with instance normalization, trained on the 20x patches (level 1; level 0 is 40x). I will also upload precomputed features for Camelyon16 soon.
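For reference, a minimal sketch (assuming PyTorch/torchvision; not necessarily the exact training code) of a ResNet18 patch feature extractor with instance normalization in place of batch normalization:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Sketch only: ResNet18 backbone with InstanceNorm2d instead of BatchNorm2d,
# used as a fixed feature extractor for 20x patches.
model = resnet18(norm_layer=nn.InstanceNorm2d)
model.fc = nn.Identity()  # drop the classifier head, keep the 512-d embeddings
model.eval()

with torch.no_grad():
    feats = model(torch.randn(4, 3, 224, 224))  # 4 dummy patches
print(feats.shape)  # torch.Size([4, 512])
```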
> With the provided pretrain weight of SIMCLR, I had tried to compute the embeddings and apply to train_tcga.py (with dataset adaption).
> However, the AUC results is about 0.66...
> Currently, I found that your code will assign the label 1 for "normal" and 0 "tumor" in camelyon16 dataset, which is because the sorting algorithm did not follow alphabetic...
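To illustrate the issue being described (a hypothetical snippet, not the repository's code): the integer label each class receives depends on the order in which the class folders are enumerated, so a non-alphabetic sort can swap the labels of "normal" and "tumor".

```python
# Hypothetical illustration of the labeling pitfall: labels follow enumeration order.
class_dirs = ["tumor", "normal"]            # order as returned by the filesystem (assumed)
print(dict(enumerate(class_dirs)))          # {0: 'tumor', 1: 'normal'} -> "normal" gets label 1
print(dict(enumerate(sorted(class_dirs))))  # {0: 'normal', 1: 'tumor'} -> alphabetic order
```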
The trained models are now updated at this link: https://drive.google.com/drive/folders/14pSKk2rnPJiJsGK2CQJXctP7fhRJZiyn?usp=sharing. These models were trained with different settings; a shorter training time and smaller batch size will lead to representations that...
You could check the `openslide.PROPERTY_NAME_MPP_X` property of the file via [OpenSlide](https://openslide.org/api/python/) and pick the level whose microns-per-pixel value corresponds to 20x magnification. Or you can separate...
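A rough sketch of that level lookup (the file name and the ~0.5 µm/px target for 20x are assumptions, not values from the repository):

```python
import openslide

TARGET_MPP = 0.5  # assumed microns-per-pixel for 20x magnification

slide = openslide.OpenSlide("example.svs")  # hypothetical slide file
base_mpp = float(slide.properties[openslide.PROPERTY_NAME_MPP_X])  # MPP at level 0

# Effective MPP at a level is the base MPP times that level's downsample factor;
# choose the level closest to the 20x target.
best_level = min(
    range(slide.level_count),
    key=lambda lvl: abs(base_mpp * slide.level_downsamples[lvl] - TARGET_MPP),
)
print(f"level 0 MPP: {base_mpp:.3f}, chosen level: {best_level}")
```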
> Hi,
>
> In the deepzoom_tiler.py code, you have used "n_levels-level[-1]-1" in organize patches function at line 208. I think it will create problem for some of the files...
I have updated the link: https://drive.google.com/drive/folders/14pSKk2rnPJiJsGK2CQJXctP7fhRJZiyn?usp=sharing These are all 20x models, trained with different settings (batch sizes of 512-4096 and training times of 5-20 days).