Yes, I set up 100 GB of swap; I don't know if that is enough for the 65B model.
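(For context on whether 100 GB of swap is enough, here is a rough back-of-the-envelope estimate; the bytes-per-parameter figures are assumptions about the precision used, not measurements from this thread.)

```python
# Rough estimate of weight memory for a 65B-parameter model.
# Assumes fp16 weights at 2 bytes/param and 4-bit quantized weights at ~0.5 bytes/param;
# activations and KV cache come on top of this.
params = 65e9        # parameter count of the 65B model
bytes_fp16 = 2       # bytes per parameter in fp16
bytes_q4 = 0.5       # approximate bytes per parameter at 4-bit quantization

print(f"fp16 weights:  ~{params * bytes_fp16 / 1e9:.1f} GB")  # ~130 GB
print(f"4-bit weights: ~{params * bytes_q4 / 1e9:.1f} GB")    # ~32.5 GB
```

So 100 GB of swap alone is likely tight for unquantized 65B weights, but may be workable with quantization.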
> I recommend using this. https://github.com/randaller/llama-chat Thank you for your answer. I have used this one, but it does not currently support multiple GPUs.
> [Check this](https://github.com/ggerganov/llama.cpp), maybe it's helpful. Thank you, I have already tried it.
After running the command as the prompt instructs, why can't I chat with it the way I can with ChatGPT?
> Error fix: in ggml.h, add `#define _POSIX_C_SOURCE 199309L`  Thank you very much
> Meta's original repo works with A100s as well.  It requires 8 GPUs for the 65B model, though.
> I'm also trying to figure out how to run with 2 GPUs.  If you succeed, please tell me.
> @Chting @wgimperial @fmeres now you may try to run the HF version on more than one GPU.  Thank you very much.
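(For anyone who wants to try this, a minimal sketch of loading an HF-converted checkpoint across several GPUs with `transformers`' `device_map="auto"`; the local model path is a placeholder, and it assumes the weights were already converted to the Hugging Face format and that `accelerate` is installed.)

```python
# Minimal sketch: shard an HF-converted LLaMA checkpoint across all visible GPUs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./llama-65b-hf"  # placeholder path to the converted weights

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # halve memory versus fp32
    device_map="auto",          # let accelerate spread layers over the available GPUs
)

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```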
> Something's wrong; since yesterday the bot's replies have been coming out like this.  The account got banned.
> Sure, you just need to adjust the `人格设置` (persona) setting.  It's harder to handle than the old version 3; it will answer, "As an AI language model, I don't have a real identity or personality." In my code I set the default persona to Gao Qiqiang; after restarting, the old version still works, but the new version doesn't 🤔