Ollama support?
any updates on this?
This would be great. Please develop this :). Any update on this?
I'm making this now locally.
Hello,
Hope you're doing well guys!
First of all, if you want to use Ollama models, you need to save the model's embeddings in the kb_s2 folder. Step 1 is to generate these embeddings for retrieval purposes.
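For anyone trying to reproduce step 1, here is a minimal sketch of generating an embedding through a locally running Ollama server and saving it under `kb_s2`. This assumes Ollama's default endpoint (`http://localhost:11434/api/embeddings`); the one-JSON-file-per-document layout and the file name are my own guesses, not necessarily what Agent-S expects:

```python
import json
import urllib.request
from pathlib import Path

# Default endpoint of a local `ollama serve` instance (assumption: default port).
OLLAMA_URL = "http://localhost:11434/api/embeddings"

def build_payload(model: str, text: str) -> dict:
    """Request body for Ollama's /api/embeddings endpoint."""
    return {"model": model, "prompt": text}

def embed(model: str, text: str) -> list:
    """POST the text to the local Ollama server and return the embedding vector."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

if __name__ == "__main__":
    # Hypothetical storage layout: one JSON file per document inside kb_s2.
    out_dir = Path("kb_s2")
    out_dir.mkdir(exist_ok=True)
    vector = embed("mxbai-embed-large:latest", "How do I open Spotify?")
    (out_dir / "example.json").write_text(json.dumps(vector))
```

Requires `ollama serve` to be running and the embedding model pulled (`ollama pull mxbai-embed-large`).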
After that, you'll need to adapt the model's output to match the input format expected by the agents — this is the issue I'm currently working on.
That said, I can share my script with you for loading and integrating Ollama models into the package.
I'd need a more thorough walkthrough to be able to replicate this. Good work though!
You need to set the provider's API key to an empty string.
@Julianvvz https://github.com/SylvainVerdy/Agent-S/tree/ollama_support (I use `ollama serve` on the command line to run Ollama.)
My issue is in manager.py, in the _generate_dag method.
This new branch worked for me: https://github.com/SylvainVerdy/Agent-S/blob/ollama_support_working (lots of edits in the core folder etc.; I may still need to clean up the code).
The LLM managed to automatically launch Spotify on my Windows computer. Model used: qwen3:latest. Embedding model: mxbai-embed-large:latest.
Since Ollama is OpenAI-compatible, it's possible to use gui-agents with the OpenAI provider, a custom base URL, and a dummy API key.
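To illustrate the pattern: Ollama exposes an OpenAI-compatible API at `/v1`, so the standard OpenAI client works if you point its base URL at the local server and pass any non-empty dummy key (the client requires one, Ollama ignores it). The exact way gui-agents consumes these parameters may differ; this only shows the underlying client configuration:

```python
def ollama_openai_config(model: str) -> dict:
    """Client settings for talking to a local Ollama server through the OpenAI API.

    base_url points at Ollama's OpenAI-compatible endpoint (default port assumed);
    api_key is a dummy value that Ollama ignores but the client insists on.
    """
    return {
        "base_url": "http://localhost:11434/v1",
        "api_key": "ollama",
        "model": model,
    }

if __name__ == "__main__":
    # Requires `pip install openai` and a running `ollama serve`.
    from openai import OpenAI

    cfg = ollama_openai_config("qwen3:latest")
    client = OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"])
    resp = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": "Say hello in one word."}],
    )
    print(resp.choices[0].message.content)
```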