aprilehannibal
> > Hi @Unrealluver Are you still facing this issue?
>
> Thanks, I have found the reason.

I'm facing the same issue. Could you please share how you resolved...
I wonder whether the LLM you used for pretraining was "lmsys/vicuna-7b-delta-v1.1" or the original "llama 7b" weights. @haotian-liu
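For context on the question above: Vicuna is distributed as *delta* weights, which must be added to the original LLaMA weights to recover a usable checkpoint (FastChat provides a tool for this). A minimal sketch of that arithmetic, using toy lists in place of real tensors and a hypothetical `apply_delta` helper:

```python
def apply_delta(base_state, delta_state):
    """Recover target weights: target[name] = base[name] + delta[name]."""
    assert base_state.keys() == delta_state.keys()
    return {
        name: [b + d for b, d in zip(base_state[name], delta_state[name])]
        for name in base_state
    }

# Toy stand-ins for the base LLaMA and Vicuna-delta state dicts.
base = {"layer0.weight": [0.1, -0.2], "layer0.bias": [0.0, 0.5]}
delta = {"layer0.weight": [0.05, 0.1], "layer0.bias": [-0.1, 0.0]}

target = apply_delta(base, delta)
```

So passing the delta checkpoint directly to a script that expects merged weights (or vice versa) produces garbage; the base and delta must be combined first.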
OK, got it! Thanks a lot!
OK, thanks!
> If you want to reproduce our results on the paper (e.g. ScienceQA), I would recommend using V0 first, as it is the model we used to train/evaluate the numbers...