# Ollama support
### Prerequisites
- [x] I have checked the latest documentation and this feature doesn't exist
- [x] I have searched for similar feature requests and found none
- [x] I have linked to related issues or discussions in the description (if any exist)
### Type of Improvement
New Functionality
### Proposed Solution
I would like to have an Ollama invoker.
My request is straightforward: just like the other model configurations, Ollama support would allow users to configure something like:
```json
{
  "name": "ollama-llama3.3",
  "type": "ollama",
  "model": "llama3.3",
  "url": "http://localhost:11434"
}
```
Here, `model` is the exact model name as listed on the Ollama website: https://ollama.com/library/llama3.3
Ideally, the integration would also download the model automatically, but that is just a wish.
I can help implement and test this.
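To make the request more concrete, here is a minimal sketch of the call such an invoker would wrap, assuming only Ollama's public REST API (`/api/chat`); the Prompty-side wiring (invoker registration, template rendering) is omitted, and the URL and model name simply mirror the proposed configuration above. Ollama also exposes `/api/pull`, which could cover the automatic-download wish.

```python
# Minimal sketch: the HTTP call an Ollama invoker would wrap.
# Uses only Ollama's documented REST API; Prompty integration details are omitted.
import requests

OLLAMA_URL = "http://localhost:11434"  # the "url" field from the proposed config
MODEL = "llama3.3"                     # the "model" field from the proposed config


def chat(messages: list[dict]) -> str:
    """Send a chat request to a local Ollama server and return the reply text."""
    response = requests.post(
        f"{OLLAMA_URL}/api/chat",
        json={"model": MODEL, "messages": messages, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["message"]["content"]


if __name__ == "__main__":
    print(chat([{"role": "user", "content": "Say hello in one sentence."}]))
```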
Hi @joshendriks 👋🏻
You can use Ollama with Prompty today if you choose the openai type and set base_url to your Ollama server URL plus /v1 (for example http://localhost:11434/v1). You can find an example in my blog post (sorry, it's in Spanish, but the example is in English 😃): https://www.returngis.net/2025/04/como-usar-prompty-con-ollama/
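For anyone curious about the mechanism this workaround relies on, here is a minimal sketch using the OpenAI Python client directly against Ollama's OpenAI-compatible /v1 endpoint; the model name and URL are just examples, and in Prompty itself you would point the openai configuration's base_url at the same address.

```python
# Minimal sketch of the workaround: Ollama serves an OpenAI-compatible API
# under <server>/v1, so an OpenAI-style client can talk to it directly.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama server URL + /v1
    api_key="ollama",                      # Ollama ignores the key, but the client requires one
)

response = client.chat.completions.create(
    model="llama3.3",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```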
Hope this helps!
Great stuff @0GiS0, thanks a lot!