Richard Chen

Results: 23 comments by Richard Chen

Hi @eran88 - I edited the default architecture to be `vit_xs`. You can also change the import statements to import your desired architecture.
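
For illustration, a minimal sketch of swapping in a different backbone; the constructor name and `patch_size` value here are assumptions, so check `vision_transformer.py` for what it actually defines:

```python
import vision_transformer as vits  # the module linked elsewhere in this thread

# Hypothetical sketch: select the backbone by name. Both "vit_xs" and the
# patch_size value are assumptions; see vision_transformer.py for the
# constructors it actually exports.
arch = "vit_xs"
model = getattr(vits, arch)(patch_size=16)
```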

Hi @ramprs21 - see https://github.com/mahmoodlab/HIPT/tree/master/3-Self-Supervised-Eval/embeddings_slide_lib.

Hi @ramprs21 - apologies for the confusion. The previous link refers to the pre-extracted "region-level" feature embeddings for each slide in TCGA. Regarding the *.pt files for hierarchical pretraining,...
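
If it helps, a minimal sketch of loading one of those embedding files; the file name is hypothetical, and the only assumption about the format is that each *.pt file was serialized with `torch.save`:

```python
import torch

# Hypothetical file name; the layout follows the embeddings_slide_lib
# directory linked above.
feats = torch.load("TCGA-XX-XXXX.pt", map_location="cpu")
print(feats.shape)  # e.g., (num_regions, embedding_dim)
```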

Hi @ramprs21 - You are right that the model is essentially trained with "1 epoch". We reported training in terms of iterations to [avoid confusion](https://twitter.com/karpathy/status/1508437725514510336), but it seems that reporting iterations can...
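
For reference, converting between the two is just arithmetic; the numbers below are illustrative, not the actual HIPT pretraining configuration:

```python
# One pass over the data corresponds to dataset_size / batch_size iterations,
# so epochs = iterations * batch_size / dataset_size. All numbers here are
# illustrative placeholders.
iterations = 400_000
batch_size = 256
dataset_size = 104_000_000  # hypothetical number of training examples

epochs = iterations * batch_size / dataset_size
print(f"~{epochs:.2f} epochs")
```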

Hi @ramprs21 - Thank you for the note. I will reflect it in the README soon. Pretraining required 2-4x A100s (for a batch size of 256) and took ~two weeks. To...

Hi @FabianHoerst 1. Even for regions with large blank spaces / a high background ratio, two-stage HIPT should still learn relevant region-level embeddings. From my own experimentation, inference + evaluation...
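
If you want to quantify how blank a region is before deciding whether to keep it, a simple illustrative heuristic (not part of HIPT itself) might look like:

```python
import numpy as np
from PIL import Image

def background_ratio(path: str, threshold: int = 220) -> float:
    """Fraction of near-white pixels in a region image.

    A simple illustrative heuristic, not part of HIPT: pixels whose
    mean RGB value exceeds `threshold` are counted as background.
    """
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    return float((img.mean(axis=-1) > threshold).mean())
```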

Hi @pazadimo - I will send an update soon. I am still deciding whether it would be better to re-hash the MCAT survival code in this repository, or update MCAT with...

The `vision_transformer.py` module is [linked here](https://github.com/mahmoodlab/HIPT/blob/master/HIPT_4K/vision_transformer.py). Where are you calling your import statement from?

Hi @jjhbw - I see your pull request and will close it soon. @giulianobertoti - if this is still giving you trouble, as mentioned, the `vision_transformer.py` module is found [here](https://github.com/mahmoodlab/HIPT/blob/master/HIPT_4K/vision_transformer.py). You...
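
For anyone still stuck, a minimal sketch of an import that resolves against that file; the path and the constructor name are assumptions, so adjust both to your clone:

```python
import sys
sys.path.append("HIPT/HIPT_4K")  # adjust to wherever you cloned the repo

# The constructor name is an assumption; DINO-style vision_transformer.py
# files typically export vit_tiny / vit_small / vit_base.
from vision_transformer import vit_small

model = vit_small(patch_size=16)
```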

Hi @lschctt @chenhao1208 @nauyan - sorry for the delay in response. Is the data not in `./TCGA_GBMLGG/all_st_cpc_img/pt_bi/`?
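
As a quick sanity check, something like the following (the path taken from above) will confirm the directory exists and contains *.pt files:

```python
from pathlib import Path

# Quick check that the expected directory exists and contains .pt files.
data_dir = Path("./TCGA_GBMLGG/all_st_cpc_img/pt_bi/")
print(data_dir.exists(), len(list(data_dir.glob("*.pt"))))
```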