Bunny
LLaMA3 or LLaMA3-Instruct
Great work! I want to know if your pre-training used LLaMA 3 or LLaMA 3-Instruct.
LLaMA 3 Base Model. Thanks!
Hey, @BAAI-DCAI team,
Any experience to share on the difference between these two versions? Why choose the Llama 3 base model rather than the instruct-tuned model?
Our primary experiments are based on Llama-3-8B. We then found that using the instruct-tuned model is better, and we have now updated the weights based on Llama-3-8B-Instruct.
Great news! Looking forward to your release of the fine-tuning strategies.
@GewelsJI
Please refer to https://huggingface.co/BAAI/Bunny-v1_0-4B.
The GitHub repo will be updated soon, and we are still working on improving the performance of Bunny-Llama-3-8B-V and Bunny-v1.0-4B. Stay tuned!
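For reference, loading the released checkpoint might look like the minimal sketch below. This assumes the usual trust_remote_code loading pattern from the Hugging Face model card; the image preprocessing and generation helpers come from the card's remote code, so check it for the exact prompt format.

```python
# Minimal loading sketch; exact preprocessing/generation helpers come from the
# model card's remote code and may differ between Bunny releases.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BAAI/Bunny-v1_0-4B"  # the checkpoint linked above

# Bunny ships custom modeling code on the Hub, so trust_remote_code is required.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # half precision to fit a single consumer GPU
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
```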
That's awesome. I'll keep an eye on your updates. Thanks.
@Isaachhh
A further question: do you plan to support Gemma models in your codebase?
@GewelsJI
Hi, we conducted some experiments with Bunny-Gemma in mid-March, and I uploaded the related code to the gemma_temp branch. Note that the version and conv_mode should be gemma (see the sketch at the end of this comment).
But we can't guarantee that it works well now, and we may not release the model weights in the near future.
Hope this helps. Feel free to comment if you have further questions.
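As a rough illustration of the conv_mode setting mentioned above, building a Gemma-style prompt could look like this. This is a hypothetical sketch that assumes Bunny keeps a LLaVA-style conversation registry (a conv_templates dict in bunny/conversation.py); the exact module path and template fields may differ on the gemma_temp branch.

```python
# Hypothetical illustration only: assumes a LLaVA-style conv_templates registry;
# names may differ on the gemma_temp branch.
from bunny.conversation import conv_templates

conv_mode = "gemma"                       # must match the Gemma template name
conv = conv_templates[conv_mode].copy()   # start from the registered template
conv.append_message(conv.roles[0], "<image>\nDescribe this picture.")
conv.append_message(conv.roles[1], None)  # leave the assistant turn open
prompt = conv.get_prompt()                # final prompt string for the tokenizer
print(prompt)
```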
@Isaachhh
I would like to fine-tune Bunny-Llama-3-8B-V on some of my own data. Can I use the existing train.py file, or should I wait for the better ViT strategy you mentioned in the README.md? Thanks for your work.
They actually unfroze the ViT in both the pre-training and SFT stages but didn't open-source the recipe.
The strategy only differs in the visual instruction tuning stage; the vision tower was frozen during the pre-training stage.
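To make the freeze/unfreeze idea concrete, here is an illustrative sketch. The actual option names in Bunny's train.py and the model's vision-tower accessor are assumptions (borrowed from LLaVA-style code), not the released recipe.

```python
# Illustrative sketch of freezing/unfreezing the vision tower; not the official recipe.
import torch.nn as nn

def set_vision_tower_trainable(vision_tower: nn.Module, trainable: bool) -> None:
    """Toggle whether the vision tower's parameters receive gradients."""
    for param in vision_tower.parameters():
        param.requires_grad = trainable

# Pre-training stage: keep the vision tower frozen.
# set_vision_tower_trainable(model.get_vision_tower(), trainable=False)   # get_vision_tower() is an assumed LLaVA-style accessor
# Visual instruction tuning stage: unfreeze the vision tower.
# set_vision_tower_trainable(model.get_vision_tower(), trainable=True)
```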
I think I was able to fine-tune the adapter starting from the phi-2 pre-train weights. Any plan to release those weights for phi-3 and llama3?
@galleon
We have released that.
@Isaachhh I was not able to find them on Hugging Face. I am talking about BAAI/bunny-pretrain-phi-2-siglip but for phi-3 or llama3 … Maybe it can be extracted from the full model? How?
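One way that extraction idea might look, purely as a hedged sketch: the "mm_projector" key pattern below is an assumption borrowed from LLaVA-style checkpoints, not a confirmed detail of Bunny's checkpoint layout.

```python
# Hypothetical sketch of extracting adapter-only weights from the full model;
# the "mm_projector" key pattern is an assumption, not a confirmed Bunny detail.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "BAAI/Bunny-Llama-3-8B-V",
    torch_dtype=torch.float16,
    trust_remote_code=True,
)
projector_state = {
    name: tensor
    for name, tensor in model.state_dict().items()
    if "mm_projector" in name
}
torch.save(projector_state, "mm_projector.bin")  # adapter-only checkpoint for reuse
```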
@galleon Training details of the model zoo: [screenshot of the training details table attached]
Thanks for adding that!
@Isaachhh hi, do you have any plans to release a high-resolution LLaMA3-based model?
@berry-ding Thanks for your interest. It will come in the following weeks, stay tuned!
@berry-ding Hi, we released Bunny-v1.1-Llama-3-8B-V, which supports 1152x1152 resolution.
Closing the issue for now as there's no further discussion. Feel free to reopen it if there are any other questions.