Qingyun

Results 78 comments of Qingyun

It is an interesting experiment. Are you just running inference with the pretrained weights, or have you tuned VILA with the two strategies? Could you provide your example?

Thanks for the authors' support. I found that `./scripts/v1_5/eval/eval_all.sh` is now available. Evaluation tools for **few-shot VQA/Caption** are also essential for researchers following this work. Looking forward...

> Hi Qingyun,
>
> Which evaluation scripts are you looking for VQA and caption? The current `eval_all.sh` should cover all metrics in the paper.

@Lyken17 Thanks for your reply! I'm...

@Lyken17 I'm writing to request evaluation tools for the few-shot VQA/Caption settings (specifically, the 4-shot OKVQA/TextVQA/CocoCaption/FlickrCaption results in the ablation studies of VILA, Tables 1/3). The experimental results validated that when used for...
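For readers following along, here is a rough sketch of what a 4-shot VQA evaluation loop typically looks like. It is not VILA's actual tooling: the `model.generate` call, the `<image>` placeholder, and the dataset fields are hypothetical stand-ins, and exact-match scoring is used only to keep the sketch short (the official VQA metric uses the multi-annotator soft accuracy).

```python
# Hypothetical sketch of a 4-shot VQA evaluation loop; not VILA's actual API.
import random

NUM_SHOTS = 4

def build_few_shot_prompt(support, query_question):
    """Prepend NUM_SHOTS solved (question, answer) pairs before the query question."""
    parts = [f"<image>\nQuestion: {ex['question']}\nAnswer: {ex['answer']}" for ex in support]
    parts.append(f"<image>\nQuestion: {query_question}\nAnswer:")
    return "\n\n".join(parts)

def evaluate_4shot(model, dataset):
    """Exact-match accuracy with 4 in-context examples sampled per query."""
    correct = 0
    for sample in dataset:
        pool = [s for s in dataset if s is not sample]
        support = random.sample(pool, NUM_SHOTS)
        prompt = build_few_shot_prompt(support, sample["question"])
        images = [ex["image"] for ex in support] + [sample["image"]]
        pred = model.generate(images=images, prompt=prompt)  # hypothetical call
        correct += int(pred.strip().lower() == sample["answer"].strip().lower())
    return correct / len(dataset)
```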

> cc'ing @kentang-mit and @Seerkfang, who are more familiar with the evaluation scripts.

@Lyken17 Okay, thanks for your reply! Dear @kentang-mit and @Seerkfang: could you please share the few-shot evaluation scripts? It...

> I notice that the AP of DINO-4scale using R50 is 49.0% in Table 1, while DINO (ours, Row 5 + contrastive DN) in Table 4 is 47.9%. Which setting or model design is...

> I wonder if it's an issue with this pypi mirror in China? xgrammar is up to 0.1.9 here: https://pypi.org/project/xgrammar/#history

Yes, maybe, but I tried `pip install xgrammar --index-url https://pypi.org/simple`...
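One quick way to check whether a mirror is lagging is to ask pypi.org directly which releases it serves. A minimal sketch using PyPI's public JSON API; the `xgrammar` name and the 0.1.9 target come from the thread, everything else is illustrative:

```python
# Minimal sketch: list xgrammar releases on pypi.org to rule out a lagging mirror.
import json
import urllib.request

from packaging.version import Version  # available in most pip-managed environments

def pypi_versions(package: str) -> list[str]:
    """Fetch the release list for `package` from PyPI's public JSON API."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    return sorted(data["releases"].keys(), key=Version)

versions = pypi_versions("xgrammar")
print("0.1.9 on pypi.org:", "0.1.9" in versions)
print("newest releases:", versions[-3:])
```

If the release shows up on pypi.org but not through the mirror, pointing pip at `https://pypi.org/simple` as above is a reasonable workaround until the mirror catches up.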

This issue covers it: https://github.com/vllm-project/vllm/issues/11542. Upgrading your glibc is all that's required.

@russellb I think it is. My cluster server has an older glibc (2.27). I downloaded the wheel with wget and installed it with pip, which gave `ERROR: xgrammar-0.1.9-cp310-cp310-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl is not...`
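As a quick sanity check, the local glibc version can be read from Python and compared against the wheel's manylinux_2_27/manylinux_2_28 tags. Whether an older pip even recognizes these PEP 600 tags is a separate question, so treat this as a rough check, not a definitive answer:

```python
# Minimal sketch: compare the local glibc against the manylinux tags in the
# wheel filename (manylinux_2_27 / manylinux_2_28 in this thread).
import platform

libc, version = platform.libc_ver()  # e.g. ("glibc", "2.27")
wheel_tags = [(2, 27), (2, 28)]      # glibc minimums implied by the wheel's tags

if libc != "glibc":
    print(f"non-glibc libc detected ({libc or 'unknown'}); manylinux wheels may not apply")
else:
    have = tuple(int(x) for x in version.split(".")[:2])
    ok = [t for t in wheel_tags if have >= t]
    print(f"local glibc {version}; satisfies manylinux tags: "
          f"{['manylinux_%d_%d' % t for t in ok] or 'none'}")
```

If glibc falls below both thresholds, upgrading glibc as suggested in the linked vllm issue is the more reliable fix than forcing the wheel.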