Yongshuo Zong

7 issues opened by Yongshuo Zong

### What is the problem this feature will solve?

Hi, does this repo have SSL-pretrained weights for ResNet-18? It would be very useful to have weights with...


Hi, thanks for the great work! I wonder how I can prompt BLIP2 and InstructBLIP to do few-shot in-context learning, e.g. few-shot VQA. Specifically, I want to have the input...
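A minimal sketch of the text-only few-shot prompting I have in mind, assuming the Hugging Face `transformers` BLIP-2 classes and a hypothetical query image path (the in-context exemplars are plain text here because this interface takes a single image per call; for InstructBLIP one would swap in `InstructBlipProcessor` / `InstructBlipForConditionalGeneration`):

```python
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")

image = Image.open("query_image.jpg")  # hypothetical query image

# Few-shot exemplars prepended as plain text before the real question.
prompt = (
    "Question: What color is the bus? Answer: red.\n"
    "Question: How many dogs are in the picture? Answer: two.\n"
    "Question: What is the man holding? Answer:"
)

inputs = processor(images=image, text=prompt, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs, max_new_tokens=10)
print(processor.decode(out[0], skip_special_tokens=True))
```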

Hi, thanks again for the nice work! I was trying to reproduce the experiments on VQAv2 using your pretrained weights and evaluate them with this [repo](https://github.com/GT-Vision-Lab/VQA) mentioned in the paper. However,...
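For reference, this is roughly how I am calling the evaluation code, assuming the `VQA` / `VQAEval` classes from that repo and hypothetical local paths; the results file is a JSON list of `{"question_id": ..., "answer": ...}` entries:

```python
import sys

# Helper and evaluation modules shipped with the GT-Vision-Lab/VQA repo.
sys.path.append("VQA/PythonHelperTools/vqaTools")
sys.path.append("VQA/PythonEvaluationTools/vqaEvaluation")
from vqa import VQA
from vqaEval import VQAEval

ann_file = "v2_mscoco_val2014_annotations.json"           # official VQAv2 annotations
ques_file = "v2_OpenEnded_mscoco_val2014_questions.json"  # official VQAv2 questions
res_file = "vqav2_val_results.json"                       # hypothetical model predictions

vqa = VQA(ann_file, ques_file)
vqa_res = vqa.loadRes(res_file, ques_file)
vqa_eval = VQAEval(vqa, vqa_res, n=2)  # n=2: report accuracy to two decimals
vqa_eval.evaluate()
print("overall accuracy:", vqa_eval.accuracy["overall"])
```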

Hi, thanks for the nice work! I wonder what the main modifications in `llava/train/transformers_replace` are compared to the original implementation in `transformers==4.31.0`, as specified in the pyproject.toml. Also, in environment_setup.sh,...
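In the meantime I have been trying to spot the changes myself by diffing the replaced files against the installed package; a minimal sketch, assuming the files under `transformers_replace` mirror the package's directory layout and that `transformers==4.31.0` is installed locally:

```python
import difflib
import pathlib
import transformers

replace_dir = pathlib.Path("llava/train/transformers_replace")  # local repo checkout
installed_dir = pathlib.Path(transformers.__file__).parent      # installed transformers package

for repl_file in sorted(replace_dir.rglob("*.py")):
    rel = repl_file.relative_to(replace_dir)
    orig_file = installed_dir / rel
    if not orig_file.exists():
        print(f"{rel}: new file, no counterpart in the installed package")
        continue
    diff = difflib.unified_diff(
        orig_file.read_text().splitlines(keepends=True),
        repl_file.read_text().splitlines(keepends=True),
        fromfile=f"transformers/{rel}",
        tofile=f"transformers_replace/{rel}",
    )
    print("".join(diff))
```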

Hi, thanks for the nice work! I tried to use the following code to enable LM-Infinite for Llama following the README:

```python
model = LlamaForCausalLM.from_pretrained('meta-llama/Llama-2-7b-chat-hf', torch_dtype=torch.bfloat16, device_map="cuda", low_cpu_mem_usage=True)
from models.llama import...
```
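For completeness, the full version of what I tried, assuming the conversion helper is `convert_llama_model(model, 4096, 10)` as I read it from the README (please correct me if the helper name or the two window arguments are different):

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model = LlamaForCausalLM.from_pretrained(
    'meta-llama/Llama-2-7b-chat-hf',
    torch_dtype=torch.bfloat16,
    device_map="cuda",
    low_cpu_mem_usage=True,
)

# Assumed helper name and arguments (pretraining context length, local branch size).
from models.llama import convert_llama_model
model = convert_llama_model(model, 4096, 10)

tokenizer = LlamaTokenizer.from_pretrained('meta-llama/Llama-2-7b-chat-hf')
inputs = tokenizer("Hello, my name is", return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```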

Hi, congrats on the nice work! I see in the paper that you mentioned the resources for training SEED-LLaMA. I wonder what resources (how many GPUs and how many hours) are...

MIRB: arxiv.org/abs/2406.12742