
Caption branch

Open jetyingjia opened this issue 1 year ago • 2 comments

Thanks for your great work! In your project, the caption branch is trained only on VG data. Its captioning ability may be weaker than models trained on large-scale caption data with a large language model. Do you have plans to train this model on large-scale caption data, or other future work in that direction?

jetyingjia avatar Jun 19 '24 08:06 jetyingjia

Hi, @jetyingjia

  1. Currently, high-quality, human-annotated region-level caption data are still limited. However, we find it helpful to further mix VG and SemanticSA-1B in a fully multimodal training stage on the v1.1 pre-trained models, without an LLM (see the mixing sketch after this list).

  2. TAP is also a strong image-level vision encoder for MLLMs (e.g., LLaVA). We find it can be a natural high-resolution replacement for any low-resolution CLIP model while achieving comparable performance (VQAv2, GQA, MMB, ...); a sketch of such a swap is below.
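
To illustrate point 1, here is a minimal sketch of drawing training samples from two caption sources at a fixed ratio. The `vg_dataset` / `sa1b_dataset` arguments and the 1:1 default ratio are assumptions for illustration, not the actual TAP training code.

```python
import random
from torch.utils.data import Dataset

class MixedCaptionDataset(Dataset):
    """Draws each sample from VG or SemanticSA-1B with probability p_vg."""

    def __init__(self, vg_dataset, sa1b_dataset, p_vg=0.5, length=None):
        self.vg = vg_dataset
        self.sa1b = sa1b_dataset
        self.p_vg = p_vg
        # Nominal epoch length; indices are re-drawn randomly each call.
        self.length = length or (len(vg_dataset) + len(sa1b_dataset))

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        # Pick the source first, then a random example from it.
        src = self.vg if random.random() < self.p_vg else self.sa1b
        return src[random.randrange(len(src))]
```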
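And for point 2, a minimal sketch of plugging TAP's image encoder into a LLaVA-style MLLM as the vision tower in place of a low-resolution CLIP model. `tap_encoder` is assumed to be a frozen module returning patch tokens of shape `(B, N, hidden_dim)`; the projector design here is a generic two-layer MLP, not necessarily what TAP or LLaVA actually use.

```python
import torch
import torch.nn as nn

class TAPVisionTower(nn.Module):
    """Wraps a frozen TAP image encoder and projects its tokens to LLM space."""

    def __init__(self, tap_encoder: nn.Module, hidden_dim: int, llm_dim: int):
        super().__init__()
        self.encoder = tap_encoder          # high-resolution ViT from TAP
        self.encoder.requires_grad_(False)  # keep the vision encoder frozen
        self.projector = nn.Sequential(     # maps visual tokens to LLM space
            nn.Linear(hidden_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # (B, 3, H, W) -> (B, N, hidden_dim) patch tokens -> LLM embeddings
        with torch.no_grad():
            tokens = self.encoder(images)
        return self.projector(tokens)
```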

PhyscalX avatar Jun 20 '24 02:06 PhyscalX

Hi, @PhyscalX In some of my cases, TAP performs well and is very efficient. I hope there will be an even better version in the future.

jetyingjia avatar Jun 21 '24 02:06 jetyingjia