Multimodal-GPT
Why is it so stupid?
+1 — even the examples given in the README fail to run properly.
For language-only chatting, I guess that's mainly because LLaMA cannot process Chinese well: both its pre-training and fine-tuning phases lack sufficient Chinese data. I once tried Alpaca, which also behaves very strangely when I use Chinese in the dialogue.
Maybe the reason is that LLaMA doesn't support Chinese well; its English-only chatting is not bad.
+1 — it's funny, and I wonder whether it even loads the right model.
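One quick way to check whether the intended weights were actually loaded is to diff the parameter names the model expects against the names present in the checkpoint; PyTorch's `load_state_dict(strict=False)` reports the same information via its `missing_keys`/`unexpected_keys` result. A minimal sketch of that check, using made-up key names rather than Multimodal-GPT's real ones:

```python
# Hypothetical sanity check: compare the parameter names a model expects
# with the names actually found in a loaded checkpoint. The key names
# below are illustrative, not Multimodal-GPT's real state-dict keys.

expected_keys = {
    "transformer.wte.weight",
    "transformer.ln_f.weight",
    "lm_head.weight",
}

# In practice this would come from torch.load("checkpoint.pt").keys();
# here it is a plain set so the sketch runs without PyTorch installed.
checkpoint_keys = {
    "transformer.wte.weight",
    "transformer.ln_f.weight",
}

missing = expected_keys - checkpoint_keys      # params the checkpoint lacks
unexpected = checkpoint_keys - expected_keys   # params the model never uses

print("missing:", sorted(missing))        # missing: ['lm_head.weight']
print("unexpected:", sorted(unexpected))  # unexpected: []
```

If `missing` is non-empty, the affected layers keep their random initialization, which can produce exactly the kind of incoherent chat behavior described above.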