Kuhn

Results: 6 comments by Kuhn

Same here. I don't know what's wrong with it.

I can give you the Baidu disk link to download if you message me.

> @basteran @amanysalahfattouh @haotian-liu , have you tried using the flag `--pretrain_mm_mlp_adapter` with the path set to `non_lora_trainables.bin` of your finetuned model? I have been facing the same issue as...

Now I know the answer. Look at the LLaVA project; you will find the two-stage weight-loading method there. If anyone still doesn't know how, contact me.
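For anyone who lands here later, here is a rough sketch of what that two-stage loading looks like, assuming a LLaVA-style LoRA checkpoint that ships a `non_lora_trainables.bin` next to the adapter weights. The paths and the base-model class are placeholders, not the exact code from the LLaVA repo:

```python
# Rough sketch of two-stage weight loading for a LoRA-finetuned LLaVA-style
# model. Paths and the base-model class are placeholders (assumptions).
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "path/to/base-model", torch_dtype=torch.float16
)

# Stage 1: load the non-LoRA trainables (e.g. the mm projector) that were
# saved separately as non_lora_trainables.bin during LoRA finetuning.
non_lora = torch.load("path/to/finetuned/non_lora_trainables.bin", map_location="cpu")
non_lora = {k.replace("base_model.model.", ""): v for k, v in non_lora.items()}
base.load_state_dict(non_lora, strict=False)

# Stage 2: attach the LoRA adapter and merge it into the base weights.
model = PeftModel.from_pretrained(base, "path/to/finetuned")
model = model.merge_and_unload()
```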

> hi @dongwhfdyer,
>
> It seems you already successfully reproduced this project.
>
> I am still confused about the training procedure.
>
> * Do we only...

During my inference there are quite a lot of "r"s... I don't know why, whether I use lmdeploy or transformers. I don't have a screenshot for the 8B model, but the 8B model really produces a pile of "r"s. I even thought the 26B model would not show this, but it still does, as shown in the figure below.
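For context, this is roughly the kind of `transformers` generation call I am running; the model path and prompt below are placeholders, and `repetition_penalty` is just one decoding knob worth checking when the output collapses into runs of a single character:

```python
# Minimal sketch of a transformers generation call like the one described
# above; the model path and prompt are placeholders, not from the original post.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/8b-model", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "path/to/8b-model",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

inputs = tok("Describe the image.", return_tensors="pt").to(model.device)
# repetition_penalty > 1.0 is one thing to try when the output degenerates
# into long runs of a single character such as "r".
out = model.generate(**inputs, max_new_tokens=128, repetition_penalty=1.1)
print(tok.decode(out[0], skip_special_tokens=True))
```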