Bin Li
> Hi, do you have any idea so far why non-contrastive methods didn't work as well as SimCLR? Thanks a lot! I think non-contrastive methods are more sensitive to training...
Hi, please make sure that the weights are indeed fully loaded into your model without mismatch; you can set `strict=True` in `load_state_dict()`. There are multiple `embedder.pth` files available, and the...
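A minimal sketch of strict loading, assuming a torchvision ResNet-18 backbone and a local checkpoint named `embedder.pth` (both are placeholders; substitute your actual embedder and path):

```python
import torch
from torchvision.models import resnet18

model = resnet18(num_classes=512)  # hypothetical backbone/head size
state_dict = torch.load('embedder.pth', map_location='cpu')
# strict=True raises a RuntimeError on any missing or unexpected key,
# instead of silently leaving parts of the model randomly initialized.
model.load_state_dict(state_dict, strict=True)
```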
It could be that the trained model weights are not loaded correctly. You can remove the warning filter to check the warnings from `load_state_dict`. Make sure instance normalization is used for...
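Continuing the sketch above, one way to surface the mismatch report instead of filtering warnings is to inspect the value returned by `load_state_dict` with `strict=False`:

```python
import warnings
import torch

warnings.simplefilter('always')  # undo any filters that hide load warnings
result = model.load_state_dict(
    torch.load('embedder.pth', map_location='cpu'), strict=False)
print('missing keys:   ', result.missing_keys)
print('unexpected keys:', result.unexpected_keys)
```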
You can use a higher threshold. In the original experiment, a different metric was used to threshold the patches.
You can perform cross-validation on the training set and find the best threshold by considering all folds.
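As a hedged sketch of that idea (the F1 metric and the candidate grid here are assumptions, not the original experiment's setup), one can pick the threshold that works best on average across folds:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import f1_score

def best_threshold(scores, labels, n_splits=5):
    """Pick the threshold with the best F1 on each fold, then average."""
    candidates = np.linspace(0.0, 1.0, 101)
    fold_best = []
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for _, val_idx in kf.split(scores):
        s, y = scores[val_idx], labels[val_idx]
        f1s = [f1_score(y, s >= t) for t in candidates]
        fold_best.append(candidates[int(np.argmax(f1s))])
    return float(np.mean(fold_best))
```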
https://github.com/binli123/dsmil-wsi#feature-vector-csv-files-explanation If you have a binary classification (true and false), it is regarded as 1 class. If you have two classes, and an optional negative class, it is regarded as...
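As a hypothetical illustration of that convention (the file names and column names below are made up; the linked README section is authoritative), a binary dataset csv might look like:

```python
import pandas as pd

# 1 = positive bag, 0 = negative bag: binary counts as a single class
bags = pd.DataFrame({
    'bag_csv': ['features/slide_001.csv', 'features/slide_002.csv'],
    'label': [1, 0],
})
bags.to_csv('dataset.csv', index=False)
```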
> Implementation and performance seems fine. > > Data / epochs / model is too domain-specific to be meaningful, but I used 45 epochs of random init shufflenetv2 on millions...
> > > Implementation and performance seems fine. > > > Data / epochs / model is too domain-specific to be meaningful, but I used 45 epochs of random init...
I incorporated the training/testing into the same pipeline in the latest commit. I also incorporated an orthogonal weights initialization, which helps make the training more stable. You can set --eval_scheme=5-fold-cv-standalone-test...
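For reference, a minimal sketch of orthogonal initialization as it is commonly applied in PyTorch (whether the commit applies it to exactly these layers is an assumption):

```python
import torch.nn as nn

def init_weights(module):
    # Orthogonal init keeps the singular values of the weight matrix at 1,
    # which tends to stabilize early training.
    if isinstance(module, nn.Linear):
        nn.init.orthogonal_(module.weight)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# usage: milnet.apply(init_weights)  # `milnet` is your aggregator model
```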
I incorporated the training/testing into the same pipeline in the latest commit. You can set --eval_scheme=5-fold-cv-standalone-test, which will perform a train/valid/test split like this: > A standalone test set consisting of 20%...
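A sketch of that evaluation scheme as described (assuming it means a 20% standalone test split plus 5-fold cross-validation on the remainder; the sizes and seeds below are placeholders):

```python
import numpy as np
from sklearn.model_selection import train_test_split, KFold

num_bags = 100  # hypothetical dataset size
indices = np.arange(num_bags)
# Standalone test set: 20% held out once, never used for model selection.
trainval_idx, test_idx = train_test_split(indices, test_size=0.2, random_state=0)
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (tr, va) in enumerate(kf.split(trainval_idx)):
    train_idx, valid_idx = trainval_idx[tr], trainval_idx[va]
    # Train on train_idx, pick the best checkpoint on valid_idx,
    # then evaluate that checkpoint once on test_idx.
    print(fold, len(train_idx), len(valid_idx), len(test_idx))
```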