mayureshgawai
I am using "llama-2-7b-chat.ggmlv3.q2_K.bin" with `LlamaCpp()` in LangChain. The "Llama.generate: prefix-match hit" message repeats many times, but I want the answer only once. How can I set this...
> > I am using "llama-2-7b-chat.ggmlv3.q2_K.bin" with `LlamaCpp()` in LangChain. The "Llama.generate: prefix-match hit" message repeats many times, but I want the answer only once. How can I...
But in my case, **f16_kv** is already True by default, and I am still seeing llama regenerate these responses.
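For reference, "Llama.generate: prefix-match hit" is a log line from llama.cpp's prompt cache, not an extra generation, so one way to silence it is to turn off verbose logging on the wrapper. Below is a minimal sketch, assuming the `LlamaCpp` wrapper from `langchain_community` and using a placeholder path for the local GGML model file:

```python
# Sketch only: model_path is a placeholder for your local model file.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="llama-2-7b-chat.ggmlv3.q2_K.bin",
    f16_kv=True,      # keep the key/value cache in fp16 (the default)
    verbose=False,    # suppress llama.cpp log output such as "prefix-match hit"
    max_tokens=256,   # cap the length of a single completion
)

# A single invoke() call produces one completion; the "prefix-match hit"
# lines are only cache diagnostics printed during generation.
answer = llm.invoke("What is the capital of France?")
print(answer)
```

If the log line still appears with `verbose=False`, it is being emitted by the underlying `llama-cpp-python` library rather than by LangChain, and it does not mean the answer is being generated more than once.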