Besides Refact.ai-hosted models, support working with any Ollama-managed local model
Hi team, I saw that the Refact.ai self-hosted version can support third-party APIs by adding a URL and key. Could you do something similar to support any Ollama-managed local model for code completion / chat / fine-tuning?
Thanks a lot.
I would love to test it out :) I'm looking forward to trying DeepSeek R1 with the Refact agent.
I set it up hoping to use it with Ollama, but no luck with that :/
I got it running, but it doesn't work correctly. I used the id ollama/qwen3:30b and didn't include the /v1 endpoint in the API base URL.
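For anyone debugging the same setup: Ollama exposes an OpenAI-compatible API under /v1 (by default at http://localhost:11434/v1), so a quick sanity check is to hit that endpoint directly before pointing the plugin at it. A minimal Python sketch, assuming a local Ollama instance with qwen3:30b already pulled (the host, port, and model name are assumptions for illustration):

```python
import requests

# Ollama's OpenAI-compatible endpoint; adjust host/port if your instance differs.
BASE_URL = "http://localhost:11434/v1"

# The model name as Ollama knows it (no "ollama/" prefix at this layer).
payload = {
    "model": "qwen3:30b",
    "messages": [{"role": "user", "content": "Write a one-line Python hello world."}],
}

# Ollama ignores the API key, but OpenAI-compatible clients usually expect one to be set.
resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": "Bearer ollama"},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

If this request succeeds but the plugin still misbehaves, the problem is more likely the model id or a missing /v1 in the configured base URL than Ollama itself.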
Have you tried another way: setting up the Ollama provider in the plugin settings?
Hey @zimdin12 @HeddaZ 👋 If you’d like to add new models or providers, feel free to open a PR — contributions are always welcome! We’ve included a Contributor Guide to help you get started easily.