mcp_basic_ollama_agent example isn't running because of invalid API calls?
Hi, I am surprised by the implementation of the Ollama example, because the class and config for 'OpenAIAugmentedLLM' is used here ([...] This implementation uses OpenAI's ChatCompletion as the LLM). As a result, the wrong API endpoints are called, such as http://localhost:11434/chat/completions, which does not match the Ollama API (see https://github.com/ollama/ollama/blob/main/docs/api.md).
How do I get the example running with my local Ollama model (llama3.2:1b)?
Regards, Martin
```yaml
openai:
  base_url: "http://localhost:11434/v1"
```
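For a quick sanity check outside mcp-agent, that same base_url can be exercised with the plain openai Python client, since Ollama exposes an OpenAI-compatible API under /v1. A minimal sketch, assuming openai>=1.0 is installed and llama3.2:1b has been pulled:

```python
# Minimal sketch: call Ollama's OpenAI-compatible /v1 endpoint directly.
# Assumes `pip install openai` and a running Ollama server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    api_key="ollama",  # required by the client, ignored by Ollama
)

response = client.chat.completions.create(
    model="llama3.2:1b",  # any locally pulled model
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

If this call succeeds, the base_url is correct and any remaining failure is in the example's configuration rather than in Ollama itself.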
Please check the base_url in mcp_basic_ollama_agent\mcp_agent.config.yaml. The value in the example seems correct.
@builder33807664 can you confirm if you were able to get this working? For some context, the ollama docs suggest using the base_url http://localhost:11434/v1. See https://github.com/ollama/ollama/blob/main/docs/openai.md.
I did notice there is a small issue when no api key is specified, so I've just pushed a fix to the example to specify an api key. Please let me know if that was the issue you were encountering. I've confirmed the example works.
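For anyone landing here later, a sketch of what the relevant section might look like after that fix (key names assumed from the example's mcp_agent.config.yaml; Ollama ignores the key's value, but the OpenAI client requires one):

```yaml
openai:
  base_url: "http://localhost:11434/v1"
  api_key: "ollama"  # placeholder; Ollama does not validate it
```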
Make sure you've run `ollama run llama3.2:1b` before starting the example, since the Ollama server needs to be running.
I got this error:
```
[DEBUG] 2025-03-27T11:04:05 mcp_agent.workflows.llm.augmented_llm_openai.finder - {'model': 'llama3.2:3b', 'messages': [{'role': 'system', 'content': "You are an agent with access to the filesystem, \n ......
[DEBUG] 2025-03-27T11:04:05 mcp_agent.workflows.llm.augmented_llm_openai.finder - Chat in progress { "data": { "progress_action": "Chatting", "model": "llama3.2:3b", "agent_name": "finder", "chat_turn": 3 } }
[mcp_agent.workflows.llm.augmented_llm_openai.finder] Error: DNS lookup failed
```
Do you know why? @saqadri
And I can get a response from a direct API call using Bruno (a Postman-like API tool).
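Not a confirmed fix, but one way to narrow this down: a DNS lookup failure from the client usually means the hostname in the configured base_url cannot be resolved from the Python process, often because of a proxy environment variable or a typo in the host, even when a tool like Bruno can reach the server. A minimal check, assuming the default port:

```python
# Sketch: verify the Ollama endpoint is reachable from Python and that no
# proxy variables are intercepting localhost traffic.
import os
import httpx  # the HTTP client the openai SDK uses under the hood

# Any *_PROXY variable here can reroute requests and cause DNS failures.
print({k: v for k, v in os.environ.items() if "proxy" in k.lower()})

# /v1/models is part of Ollama's OpenAI-compatible API.
resp = httpx.get("http://localhost:11434/v1/models", timeout=5)
print(resp.status_code, resp.text[:200])
```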
@saqadri it works! Thanks a lot for your support.