
Classification evaluation for LLaVA

Open rishika2110 opened this issue 1 year ago • 5 comments

Hi, currently, the code throws a NotImplementedError for LLaVA, but I believe the paper demonstrates zero-shot classification on LLaVA. When will the code be updated to include this feature? Alternatively, could you point out the main parts that would need significant changes to incorporate LLaVA?

Thank you.

rishika2110 avatar Mar 05 '24 20:03 rishika2110

Hi, thanks for asking. We demonstrate zero-shot classification only for the CLIP models on their own and consider LLaVA and OpenFlamingo for captioning/VQA tasks.
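For reference, CLIP-style zero-shot classification boils down to scoring an image embedding against one text embedding per class (e.g. from prompts like "a photo of a {class}") and taking the argmax. A minimal toy sketch, not the actual codebase, with made-up embedding vectors:

```python
# Hypothetical sketch of CLIP-style zero-shot classification.
# The embeddings below are made-up toy vectors, not real CLIP outputs.
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def zero_shot_classify(image_emb, class_embs):
    """Return the class whose text embedding is most similar to the image."""
    return max(class_embs, key=lambda c: cosine(image_emb, class_embs[c]))

# Toy text embeddings, one per candidate class.
class_embs = {"cat": [1.0, 0.1], "dog": [0.1, 1.0]}
print(zero_shot_classify([0.9, 0.2], class_embs))  # -> cat
```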

chs20 avatar Mar 05 '24 22:03 chs20

Thank you for the clarification. I have another question: Why is the batch size hardcoded to 1? Is it just to avoid padding text tokens? Or am I missing something?

rishika2110 avatar Mar 07 '24 16:03 rishika2110

You're right, it should definitely be possible to run with larger batch sizes. It's hardcoded to batch_size 1 in a few places simply because we couldn't fit much more on our devices for adversarial evaluations anyway.
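To illustrate the padding point from the question above: batching variable-length prompts means padding them to a common length and tracking an attention mask, which batch_size 1 sidesteps entirely. A minimal sketch (hypothetical helper, not from the RobustVLM codebase):

```python
# Hypothetical sketch: left-pad token-id lists to a common length.
# Left padding keeps the final (most recent) tokens aligned, which is
# what autoregressive generation expects.
PAD_ID = 0

def pad_batch(sequences, pad_id=PAD_ID):
    """Pad token-id lists to the longest sequence in the batch.

    Returns the padded batch and an attention mask
    (1 = real token, 0 = padding).
    """
    max_len = max(len(s) for s in sequences)
    padded, mask = [], []
    for s in sequences:
        n_pad = max_len - len(s)
        padded.append([pad_id] * n_pad + list(s))
        mask.append([0] * n_pad + [1] * len(s))
    return padded, mask

batch, attn = pad_batch([[5, 6, 7], [8, 9]])
# batch -> [[5, 6, 7], [0, 8, 9]]; attn -> [[1, 1, 1], [0, 1, 1]]
```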

chs20 avatar Mar 09 '24 10:03 chs20

Hi, thank you so much for clarifying everything. Just one last question: does the code use beam search to generate the outputs?

rishika2110 avatar Apr 23 '24 21:04 rishika2110

No problem :) We basically stick to how the models are evaluated in their respective papers: greedy decoding without beam search for LLaVA, and beam search with 3 beams for OpenFlamingo.
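For anyone unfamiliar with the difference: greedy decoding picks the single highest-probability token at each step, while beam search keeps the k highest-scoring partial sequences and can therefore recover sequences whose first token is not the greedy choice. A toy illustration (the vocabulary and probabilities are made up, not from either model):

```python
# Toy illustration of greedy decoding vs. beam search.
# The next-token distributions below are fabricated for demonstration.
import math

def next_logprobs(prefix):
    # Hypothetical model: prefix (tuple of tokens) -> {token: log-prob}.
    table = {
        (): {"a": math.log(0.6), "b": math.log(0.4)},
        ("a",): {"x": math.log(0.55), "y": math.log(0.45)},
        ("b",): {"x": math.log(0.95), "y": math.log(0.05)},
    }
    return table.get(prefix, {})  # empty dict = generation ends

def greedy(steps=2):
    # Take the argmax token at every step.
    seq = ()
    for _ in range(steps):
        dist = next_logprobs(seq)
        if not dist:
            break
        seq += (max(dist, key=dist.get),)
    return seq

def beam_search(num_beams=3, steps=2):
    # Keep the num_beams highest-scoring partial sequences at each step.
    beams = [((), 0.0)]
    for _ in range(steps):
        candidates = []
        for seq, score in beams:
            dist = next_logprobs(seq)
            if not dist:
                candidates.append((seq, score))
                continue
            for tok, lp in dist.items():
                candidates.append((seq + (tok,), score + lp))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:num_beams]
    return beams[0][0]

print(greedy())              # -> ('a', 'x'), total prob 0.33
print(beam_search(3))        # -> ('b', 'x'), total prob 0.38
```

Here beam search finds ('b', 'x') with probability 0.6·0.55 = 0.33 < 0.4·0.95 = 0.38, which greedy misses because it commits to 'a' at step one. A beam width of 1 reduces to greedy decoding.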

chs20 avatar May 14 '24 12:05 chs20