
Use Llama3 for PrivateGPT

Open kabelklaus opened this issue 1 year ago • 1 comments

How is it possible to use Llama3 instead of Mistral for PrivateGPT?

kabelklaus avatar Apr 25 '24 05:04 kabelklaus

I used Ollama to get the model, with the command line `ollama pull llama3`. In settings-ollama.yaml, I changed the line `llm_model: mistral` to `llm_model: llama3 # mistral`.

After restarting PrivateGPT, the model is displayed in the UI.
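For reference, the relevant portion of settings-ollama.yaml would look roughly like the sketch below. This is an assumption based on the change described above; the exact surrounding keys vary by PrivateGPT version, so edit only the `llm_model` line in your own file:

```yaml
# settings-ollama.yaml (excerpt, sketch)
ollama:
  llm_model: llama3   # was: mistral
```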

dlorenz70 avatar Apr 25 '24 14:04 dlorenz70

> I used Ollama to get the model, with the command line `ollama pull llama3`. In settings-ollama.yaml, I changed the line `llm_model: mistral` to `llm_model: llama3 # mistral`.
>
> After restarting PrivateGPT, the model is displayed in the UI.

Apologies for asking, but it seems like you are hinting that the model is displayed in the UI but not actually working? Or have I over-interpreted the statement?

skyworld2147 avatar May 20 '24 17:05 skyworld2147

Remember that if you decide to use another LLM model in Ollama, you have to pull it first: `ollama pull llama3`. After downloading, make sure Ollama is working as expected. You can check this with an example cURL request:

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt":"Why is the sky blue?"
 }'
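By default, `/api/generate` streams its answer as one JSON object per line, each carrying a `response` fragment, with `done: true` on the final object. As a minimal sketch of how to reassemble such a stream in Python (the sample data below is illustrative, not a real model reply):

```python
import json

def join_stream(ndjson_text: str) -> str:
    """Concatenate the "response" fragments from Ollama's streaming
    /api/generate output (one JSON object per line)."""
    parts = []
    for line in ndjson_text.splitlines():
        if not line.strip():
            continue
        obj = json.loads(line)
        parts.append(obj.get("response", ""))
        if obj.get("done"):  # final object in the stream
            break
    return "".join(parts)

# Illustrative two-chunk stream, as /api/generate might emit it:
sample = "\n".join([
    '{"model":"llama3","response":"The sky is blue ","done":false}',
    '{"model":"llama3","response":"because of Rayleigh scattering.","done":true}',
])
print(join_stream(sample))  # → The sky is blue because of Rayleigh scattering.
```

If you prefer a single JSON object instead of a stream, you can add `"stream": false` to the request body.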

jaluma avatar Jul 10 '24 11:07 jaluma