
Provide docs on integration with local Ollama models

cloudbow opened this issue 1 year ago · 1 comment

Issue

Everything is working awesome and I love this product.

Please provide an integration doc on how to use llama3 models served locally via ollama serve. There is no clear documentation on how to integrate them beyond the general guidelines. When I try, aider just deletes my whole file and does nothing, which might mean the model I am specifying is bad. I would like a doc, please, on how to integrate the llama3 models.

What I tried:

aider --model ollama/llama3:70b

That did not work, so I then tried listing the models:

aider --models ollama

But it did not show any llama3 "local models". It may be better to distinguish local models from remote models in the output, so that developers have the peace of mind that they are using open-source local models hosted on their own system.

Maybe something like the following:

aider --local-models
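(Note that --local-models is a suggested flag, not an existing one. As a stopgap, Ollama's own CLI can already list the models pulled onto the local machine; this is a standard Ollama command, not an aider feature:

ollama list

Its output covers only locally hosted models, which gives at least part of the local-vs-remote distinction asked for above.)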

Please also add this to the docs here: https://aider.chat/docs/llms.html

Version and model info

Aider: 0.31.1; Models: Gemini, GPT-4, GPT-3

cloudbow · May 05 '24

Thanks for trying aider and filing this issue.

I think this section of the docs may meet your needs?

https://aider.chat/docs/llms.html#ollama

paul-gauthier · May 05 '24

Thank you, Paul! Yes, it works. I used codellama:7b like this:

export OLLAMA_API_BASE=http://127.0.0.1:11434
aider --model ollama/codellama:7b --no-auto-commits

cloudbow · May 07 '24
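For completeness, a sketch of the full local flow, assuming Ollama's default port of 11434 and that the model has not been pulled yet (ollama pull and ollama serve are standard Ollama CLI commands):

# Fetch the model weights onto the local machine
ollama pull codellama:7b

# Start the local Ollama server; it listens on 127.0.0.1:11434 by default.
# Skip this step if the server is already running.
ollama serve

# Point aider at the local server and name the model with the ollama/ prefix
export OLLAMA_API_BASE=http://127.0.0.1:11434
aider --model ollama/codellama:7b --no-auto-commits

The ollama/ prefix is what routes aider's requests to the local Ollama server rather than a hosted API, which is why aider --model llama3:70b alone would not work.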