CONCH
A vision-language foundation model for computational pathology - Nature Medicine
Hi, I was wondering whether CONCH can directly convert an image to text. From the code, it seems that CONCH only supports "image-to-text retrieval," meaning that...
Amazing work! Congratulations! Could you share which slides you used as the test set in the downstream tasks? I am interested in slide-level predictions for the TCGA datasets....
Excellent work. May I ask whether you have plans to release the 1.17 million image-caption pairs?
We'd like to follow your excellent work. Could you provide your segmentation code or a demo?
Regarding the model's architecture, I have a question. CONCH appears to be the same as [CoCa](https://arxiv.org/pdf/2205.01917v2). What exactly are the differences between them?
Hello! The paper mentions that TCGA-NSCLC data, comprising 150 samples in total, was used for validation. Could you please provide the names of these 150 test samples?
Hi, thanks for sharing your work! Could you please also share the training setup of CONCH? I want to fine-tune CONCH so it can properly caption my H&E data. I...
Hi! Congratulations. Could you release the image-caption pairs collected from PubMed?
Added correct parameter.
Hi, I received the warning below when trying to load the model. Is it okay? Thanks! conch/open_clip_custom/factory.py:18: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the...
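For context on that warning: it comes from calling `torch.load` with its historical default of `weights_only=False`, which unpickles arbitrary Python objects and is slated to change in future PyTorch releases. A minimal sketch of the safer pattern is below; the temporary file and state-dict contents are illustrative only, and CONCH's actual checkpoint format may require `weights_only=False` if it stores non-tensor objects.

```python
import os
import tempfile

import torch

# Save a small illustrative state dict to a temporary checkpoint file.
state = {"linear.weight": torch.zeros(2, 2)}
path = os.path.join(tempfile.mkdtemp(), "ckpt.pt")
torch.save(state, path)

# weights_only=True restricts unpickling to tensors and plain containers,
# which both silences the FutureWarning and is safer for untrusted files.
loaded = torch.load(path, map_location="cpu", weights_only=True)
print(sorted(loaded.keys()))
```

If loading a checkpoint you trust fails with `weights_only=True` (because it contains custom objects), passing `weights_only=False` explicitly also silences the warning while keeping the old behavior.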