濮澍

7 issues by 濮澍

`lm_eval --model local-chat-completions --tasks gpqa_main_cot_zeroshot --model_args model=Qwen/Qwen2-72B-Instruct,base_url=https://api.together.xyz/v1 --output_path ./gpqa/result/Qwen2 --use_cache ./gpqa/cache/Qwen2 --log_samples --limit 10 --gen_kwargs temperature=0.7,max_tokens=8192` Using this command, Qwen2's outputs just end abruptly, as in the image below...
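One way to narrow down abrupt endings like this is to check whether the logged generations were cut off (by `max_tokens` or a stray stop sequence). The sketch below is an assumption-laden heuristic, not part of lm-eval itself; the `resps` field name is a guess about the `--log_samples` JSONL layout, so inspect one line of your own log first.

```python
import json

def truncated(sample_text: str) -> bool:
    """Heuristic: a completion that does not end with sentence-final
    punctuation was probably cut off mid-generation."""
    return not sample_text.rstrip().endswith((".", "!", "?", "\"", ")"))

def scan_samples(path: str) -> list[int]:
    """Return indices of logged samples whose generation looks truncated.
    Assumes one JSON object per line; the 'resps' key is an assumption."""
    bad = []
    with open(path) as f:
        for i, line in enumerate(f):
            sample = json.loads(line)
            text = sample.get("resps", [[""]])[0][0]
            if truncated(text):
                bad.append(i)
    return bad
```

If most samples flag as truncated even with `max_tokens=8192`, the cutoff is more likely a server-side stop sequence than a length limit.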

I'm testing a model that isn't among the supported VLMs; is there any solution, or code I can modify, to fit my own model into the evaluation?
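The usual pattern for an unsupported model is to wrap it in whatever model interface the evaluation toolkit expects. The sketch below is purely illustrative: `BaseEvalModel` and its `generate` signature stand in for the toolkit's real base class, which you would subclass instead.

```python
# Hypothetical adapter pattern for plugging a custom VLM into an eval harness.
# Class and method names here are illustrative, not any toolkit's real API.
from abc import ABC, abstractmethod

class BaseEvalModel(ABC):
    """Stand-in for the toolkit's model base class."""
    @abstractmethod
    def generate(self, prompt: str, image_paths: list[str]) -> str: ...

class MyVLM(BaseEvalModel):
    """Adapter: load your weights in __init__ and map the harness's
    (prompt, images) call onto your model's own inference API."""
    def __init__(self, model_path: str):
        self.model_path = model_path  # e.g. load your checkpoint here

    def generate(self, prompt: str, image_paths: list[str]) -> str:
        # Replace this stub with your model's real generate/forward call.
        return f"[{self.model_path}] answer to: {prompt} ({len(image_paths)} images)"
```

Once the adapter matches the expected interface, the rest of the harness (prompting, scoring) should run unchanged against your model.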

![image](https://github.com/user-attachments/assets/7ed9f432-2ebb-4fe7-8595-cd5ea75a9c43) Three days ago everything was fine; however, when I reran it today things had changed, and it can no longer generate good video.

I only see multimodal input with text output when using Emu2; is there any way to generate text plus multiple images?

I've noticed you mentioned that interleaved text-image generation is only possible with an instruction-tuned model. Would you release one in the future, or could you please share some instruction-tuning methods?

If I want to finetune the model on my own dataset, which files should I modify? And how can I make the model generate multiple images in one turn?

Does the model require further finetuning? I'm wondering why the playground uses a 'for' loop to generate a story.
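A plausible reading of that 'for' loop is that the model emits one text+image segment per call, so the playground generates a story autoregressively, feeding each segment back in as context. The sketch below is a guess at that pattern; `generate_segment` is a stub standing in for the real model call.

```python
# Hypothetical sketch of the playground's for-loop story generation.
# generate_segment is a stub; a real model call would go in its place.

def generate_segment(context: str, step: int) -> tuple[str, str]:
    """Stub: returns one (text, image_ref) story step for the given context."""
    text = f"Scene {step}: continuation of [{context[-20:]}]"
    image = f"image_{step}.png"
    return text, image

def generate_story(prompt: str, n_steps: int = 3) -> list[tuple[str, str]]:
    """Extend the story one scene at a time, feeding each generated
    segment back into the context for the next call."""
    context = prompt
    story = []
    for step in range(1, n_steps + 1):
        text, image = generate_segment(context, step)
        story.append((text, image))
        context += " " + text  # autoregressive: output becomes input
    return story
```

Under this reading, no extra finetuning is implied by the loop itself; it simply chains single-segment generations into a multi-image story.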