
Model performance and evaluation metrics on the OmniMedVQA dataset

Open · Lycus99 opened this issue on Jul 16, 2024 · 0 comments

Thanks for your work! After reading the OmniMedVQA paper, I have two questions and would sincerely appreciate your answers.

  1. According to the MedVInT and RadFM papers, the dataset used to train RadFM is much larger than the one used for MedVInT (16M vs. 1.64M). However, in your paper MedVInT outperforms RadFM. Have you analyzed the prediction results of the two models in more detail?

  2. The QA scores and prefix-based scores are distributed differently across image modalities. Which metric is more informative when selecting a model for a given modality? (A rough sketch of how I understand the two metrics follows below.)
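
For reference, here is a minimal sketch of how I understand the difference between the two metrics. It assumes a generic `generate` function and a per-option log-likelihood scorer; these interfaces and the matching details are my assumptions, not the actual evaluation code in this repo:

```python
# Hypothetical sketch of the two metrics; `generate` and `option_loglikelihood`
# are assumed interfaces, not the actual Multi-Modality-Arena evaluation code.
from typing import Callable, List


def qa_score(generate: Callable[[str, str], str],
             image: str, question: str, options: List[str], gold: str) -> float:
    """QA score: let the model answer free-form, then check whether the
    generated text contains the ground-truth option."""
    prompt = f"{question} Options: {', '.join(options)}"
    answer = generate(image, prompt).lower()
    return float(gold.lower() in answer)


def prefix_score(option_loglikelihood: Callable[[str, str, str], float],
                 image: str, question: str, options: List[str], gold: str) -> float:
    """Prefix-based score: score each candidate option by the likelihood the
    model assigns to it as a continuation of the prompt, and count the sample
    as correct when the ground-truth option ranks highest."""
    scores = {opt: option_loglikelihood(image, question, opt) for opt in options}
    predicted = max(scores, key=scores.get)
    return float(predicted == gold)
```

My understanding is that the QA score depends on the model's instruction-following and output formatting, while the prefix-based score only compares likelihoods over the fixed option set, which may explain why the two distributions differ across modalities.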

Lycus99 · Jul 16 '24 13:07