stefan_liu

Results: 10 comments by stefan_liu

Could you please give an example showing how to run your post-quantization code? I have tried your [commit](https://github.com/tgisaturday/pytorch-image-models) like the following:

```python
from timm.models.quantization import quant_rexnet_100

model = quant_rexnet_100(pretrained=True)
```
...

> We are working on BLIP-2 VQA evaluation scripts @yonatanbitton. Meanwhile please refer to our paper for hyperparameters.

Hi, thanks for the wonderful work. I have not found the VQAv2...

> The current codebase supports training on these datasets.

Hi, I found that the current BLIP-2 codebase needs some modification for VQAv2 and OKVQA fine-tuning. The modifications I made:
- `samples["text_output"]`...
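For context, a minimal sketch of the kind of key remapping this likely refers to; the excerpt is truncated, so apart from `samples["text_output"]` the field names (`text_input`, `answer`) and the helper itself are assumptions, not the actual edit:

```python
# Hypothetical helper: adapt a VQA batch to the keys blip2_t5.forward expects.
def adapt_vqa_batch(batch):
    return {
        "image": batch["image"],
        # Question wrapped in the prompt used elsewhere in this thread.
        "text_input": [
            "Question: {} Short answer:".format(q) for q in batch["text_input"]
        ],
        # The ground-truth answer string becomes the generation target.
        "text_output": batch["answer"],
    }
```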

> @kebijuelun Hi, your answer is very useful. Is the parameter modification you mentioned for a single V100? If I have eight 3090s, can I fine-tune the FlanT5XL...

> @kebijuelun Thanks for your response. If I use the 3090, can I use the same "bf16" type? In blip2_t5.py, will I need to change parameters like "text_output"? I don't see any...
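As an aside, bf16 itself should not be a problem on a 3090; here is a generic check (this snippet is an illustration, not code from LAVIS):

```python
import torch

# Ampere GPUs such as the RTX 3090 support bfloat16 natively, so a bf16
# autocast context can be kept as-is.
if torch.cuda.is_bf16_supported():
    with torch.autocast("cuda", dtype=torch.bfloat16):
        pass  # run the model forward pass here
```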

> @kebijuelun Hello, I want to ask again: did you fine-tune with eval_okvqa_zeroshot_flant5xl.sh? Did you use a single GPU or multiple GPUs, and how much memory do you have? I...

> @kebijuelun Hi, are you also using the prompt suggested by the paper for VQA?

Yes, I use the prompt from the eval script: https://github.com/salesforce/LAVIS/blob/main/lavis/projects/blip2/eval/okvqa_zeroshot_flant5xl_eval.yaml#L41 `prompt: "Question: {} Short answer:"`
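To spell out how that template is used, the `{}` placeholder is filled with the raw question string (the example question below is made up):

```python
prompt = "Question: {} Short answer:"

# Hypothetical question, only to show how the placeholder is filled:
text_input = prompt.format("What color is the bus?")
# -> "Question: What color is the bus? Short answer:"
```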

> I tried to use your code to fine-tune FLAN-T5-xl, but the loss is always oscillating and never converges. I used the prompt "Question: {} Short Answer:" and input the...

> I think the provided code does not support taking the question into the Q-Former.
> #198 ![image](https://user-images.githubusercontent.com/29164878/228719951-7ce4e271-8082-461e-94b6-cea7e3716a72.png)

Hi, the question is input to the LLM in this case. You can...
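Schematically, this is the path where the question reaches the LLM in BLIP-2's T5 variant. The function below is a paraphrased sketch, not the verbatim LAVIS code; the argument names are placeholders:

```python
import torch

def build_t5_inputs(query_output, t5_proj, t5_model, t5_tokenizer, questions):
    # Q-Former query outputs are projected into the T5 embedding space.
    query_embeds = t5_proj(query_output.last_hidden_state)
    # The question text is tokenized and embedded by T5 itself...
    tokens = t5_tokenizer(questions, padding="longest", return_tensors="pt")
    text_embeds = t5_model.encoder.embed_tokens(tokens.input_ids)
    # ...and appended after the query embeddings, so the question is
    # consumed by the LLM, not by the Q-Former.
    return torch.cat([query_embeds, text_embeds], dim=1)
```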

> Have you tried using torchrun?

I tried torchrun, but DDP cannot reduce the vRAM usage: each rank still holds a full replica of the model.
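For reference, a minimal sketch of a standard DDP wrap under torchrun (the function and its setup are a generic illustration, not code from this thread), which shows why per-GPU memory stays the same:

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def wrap_ddp(model: torch.nn.Module) -> DDP:
    # torchrun sets LOCAL_RANK for every worker process.
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group("nccl")
    torch.cuda.set_device(local_rank)
    # DDP places a full copy of the model on every rank and splits the
    # *data* across GPUs; it does not shard the model, so per-GPU vRAM
    # stays roughly the same as in single-GPU training.
    return DDP(model.to(local_rank), device_ids=[local_rank])
```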