tokenize-anything
Caption branch
Thanks for your great work! In your project, the caption branch is trained only on VG data, so its captioning ability may be weaker than models trained with large-scale caption data and a large language model. Do you have plans to train this model with larger caption datasets, or any other future work in this direction?
Hi, @jetyingjia
-
Currently, high-quality, human-annotated region-level caption data are still limited. However, we find it helpful to further mix VG and SemanticSA-1B in a fully multimodal training stage on the v1.1 pre-trained models, without an LLM.
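For illustration only, here is a minimal sketch of how two region-caption sources could be mixed during such a training stage. The dataset classes and the mixing ratio are hypothetical placeholders; only the source names (VG, SemanticSA-1B) come from the discussion above.

```python
# Hypothetical sketch: sample from VG with probability `vg_prob`,
# otherwise from SemanticSA-1B, within a single training stage.
import random
from torch.utils.data import Dataset

class MixedRegionCaptionDataset(Dataset):
    def __init__(self, vg_dataset, sa1b_dataset, vg_prob=0.5):
        self.vg = vg_dataset        # human-annotated region captions (VG)
        self.sa1b = sa1b_dataset    # SemanticSA-1B region data
        self.vg_prob = vg_prob      # mixing ratio (assumed value)

    def __len__(self):
        # Nominal epoch length; both sources are sampled with replacement.
        return len(self.vg) + len(self.sa1b)

    def __getitem__(self, _):
        if random.random() < self.vg_prob:
            return self.vg[random.randrange(len(self.vg))]
        return self.sa1b[random.randrange(len(self.sa1b))]
```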
-
TAP is also a strong image-level vision encoder for MLLMs (e.g., LLaVA). We find it can be a natural high-resolution replacement for low-resolution CLIP models, achieving comparable performance (VQAv2, GQA, MMB, ...).
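As a rough sketch of what "replacing the CLIP vision tower" could look like in a LLaVA-style model: the TAP encoder interface (`tap_image_encoder(images)` returning patch features) and the projector shape below are assumptions for illustration, not the project's actual API.

```python
# Minimal sketch: TAP image encoder as the vision tower of a LLaVA-style MLLM.
import torch
import torch.nn as nn

class TAPVisionTower(nn.Module):
    def __init__(self, tap_image_encoder, feat_dim, llm_dim):
        super().__init__()
        self.encoder = tap_image_encoder          # TAP ViT, frozen or fine-tuned
        # Two-layer MLP projector, as commonly used in LLaVA-1.5-style models.
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim)
        )

    def forward(self, images):
        # images: (B, 3, H, W) at the encoder's native (higher) resolution.
        patch_tokens = self.encoder(images)        # (B, N_patches, feat_dim), assumed output
        return self.projector(patch_tokens)        # (B, N_patches, llm_dim)

# The projected tokens are then concatenated with the text embeddings and fed
# to the language model, exactly as with a CLIP-based vision tower.
```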
Hi, @PhyscalX. In some of my cases, TAP performs well and very efficiently. I hope there will be an even better version in the future.