
Support Self Hosted Ollama as AI Provider

Open awptechnologies opened this issue 2 years ago • 21 comments

It would be amazing if there was a way to incorporate self-hosted Ollama. Giving users the ability to use Ollama lets us really sculpt Activepieces' AI responses to our liking.

awptechnologies avatar Nov 30 '23 05:11 awptechnologies

We do have a LocalAI piece, @awptechnologies. I think that does what you need:

https://github.com/mudler/LocalAI

abuaboud avatar Dec 01 '23 13:12 abuaboud

I am closing this issue; feel free to reopen if the question is not answered.

abuaboud avatar Dec 01 '23 13:12 abuaboud

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see. If this issue persists with the latest stable version of Activepieces, please open a new issue that references this one.

github-actions[bot] avatar Dec 01 '23 13:12 github-actions[bot]

@abuaboud

LocalAI does NOT support Ollama as a backend sadly. Please reopen this.

Ollama is also a much bigger thing now, so it would be really good to have it supported directly. Basic integration shouldn't be hard to do, I feel.

There is already a discussion on your community forum that went in the right direction, I feel:

https://community.activepieces.com/t/ollama-or-groq-or-all-the-other-llms-that-use-the-same-api-format-as-openai/4215

ic4-y avatar Jun 01 '24 19:06 ic4-y

I would like to mention that it is possible to access an Ollama server with the LocalAI piece. Please use the Custom API Call action as shown in the attached images. This only covers the /generate endpoint, but it shows that working with Ollama through LocalAI is possible.

(screenshots of the Custom API Call configuration)
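In code form, the call above is roughly equivalent to the following sketch of a request to Ollama's /api/generate endpoint (host, model name, and prompt are example values, not taken from the screenshots; requires Node 18+ for the global fetch):

```ts
// Rough sketch of the Custom API Call against Ollama's /api/generate endpoint.
const res = await fetch("http://host.docker.internal:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3",              // example: any model already pulled into Ollama
    prompt: "Why is the sky blue?",
    stream: false,                // return one JSON object instead of a stream
  }),
});

const data = (await res.json()) as { response: string };
console.log(data.response);       // the generated text
```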


And here is the result

(screenshot of the result)

MarekSurma avatar Jun 12 '24 08:06 MarekSurma

I would like to mention that it is possible to access ollama server with LocalAI piece.

Yes, that is a workaround I wasn't aware of, if a bit of a tedious one, since you now need to run two services side by side. Having a direct Ollama integration via a separate piece would still be great.

ic4-y avatar Jun 16 '24 09:06 ic4-y

I tried the same way you did, but I am getting unexpected results. (screenshot attached)

remon-rakibul avatar Sep 11 '24 06:09 remon-rakibul

Frankly, the only real solution is a separate Ollama piece. No idea if I will ever have the time to make a contribution.

In your case, however, @remon-rakibul, this is most likely a server error. You should check the logs to see what is going on.

ic4-y avatar Oct 21 '24 16:10 ic4-y

Yeah this workaround is academically interesting but not terribly useful for a "no code" platform. There are so many OpenAI-compatible API endpoints out there. Maybe the real fix is to add an option to the OpenAI integration to let the user set a non-default URL?

magnus919 avatar Jan 29 '25 01:01 magnus919

Any updates?

orkutmuratyilmaz avatar Feb 25 '25 13:02 orkutmuratyilmaz

I always wonder why projects don't just default to the OpenAI endpoint format, as even third parties like Gemini have standardized on it, and most local servers and apps expose it.

cchance27 avatar Feb 25 '25 14:02 cchance27

+1

hotlong avatar Feb 27 '25 07:02 hotlong

http://host.docker.internal:11434

works for me using the OpenAI provider with the base URL changed to that, pointing to Ollama on the macOS host.
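For anyone wondering what "changing the base URL" amounts to, here is a minimal sketch using the official openai npm client pointed at Ollama's OpenAI-compatible routes (which live under /v1); the host and model name are example values, and how Activepieces exposes this setting in its UI may differ:

```ts
import OpenAI from "openai";

// Point an OpenAI-style client at a local Ollama server.
// Ollama ignores the API key, but the client requires a non-empty value.
const client = new OpenAI({
  baseURL: "http://host.docker.internal:11434/v1",
  apiKey: "ollama", // placeholder, not validated by Ollama
});

const completion = await client.chat.completions.create({
  model: "llama3", // example: must already be pulled in Ollama
  messages: [{ role: "user", content: "Hello from Activepieces!" }],
});

console.log(completion.choices[0].message.content);
```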

kaovilai avatar Apr 14 '25 08:04 kaovilai

http://host.docker.internal:11434

works for me using the OpenAI provider with the base URL changed to that, pointing to Ollama on the macOS host.

How do you do this?

Mihai-CMM avatar Jul 15 '25 14:07 Mihai-CMM

The lack of an Ollama (or even LiteLLM) integration is a real detriment, especially since it is already in competitors like Flowise and n8n. I am not sure how @kaovilai was changing the base URL for OpenAI (maybe the OPENAI_BASE_URL env variable?), but that didn't seem to work for me.

Like the lack of MinIO support, it shows a lack of concern for self-hosting.

ErroneousBosch avatar Aug 03 '25 17:08 ErroneousBosch

+1

flex-yeongeun avatar Oct 24 '25 06:10 flex-yeongeun

It's in the UI somewhere; there are two or three places to put openai_base_url or its equivalents. Alas, I just moved to n8n for now, since it's more mature in general and fits my self-hosted needs well enough within its license restrictions.

kaovilai avatar Oct 24 '25 19:10 kaovilai

I know there's at least one spot for the pieces, and another place for "system AI" where it's not workflow-specific.

kaovilai avatar Oct 24 '25 19:10 kaovilai

The Custom API Call seems to work for me but needs a workaround. LM Studio works well; Ollama needs the "Parse response output" step. Below are my configurations.

LM Studio: (configuration screenshot)

Ollama (needs "Parse response output"):

(configuration screenshot)
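A guess at what the "Parse response output" step has to deal with, assuming /api/generate is left in its default streaming mode: Ollama then returns newline-delimited JSON chunks that have to be joined back into one string (setting "stream": false in the request body avoids this extra parsing). A minimal sketch:

```ts
// Join Ollama's streamed /api/generate output (one JSON object per line)
// back into a single response string.
function parseOllamaStream(raw: string): string {
  return raw
    .trim()
    .split("\n")
    .map((line) => JSON.parse(line) as { response?: string })
    .map((chunk) => chunk.response ?? "")
    .join("");
}

// Example: two streamed chunks become "Hello world".
const raw = '{"response":"Hello "}\n{"response":"world","done":true}';
console.log(parseOllamaStream(raw)); // "Hello world"
```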

VONVONONE avatar Oct 25 '25 09:10 VONVONONE

I think Ollama would be a great addition as a dedicated piece. The current workarounds are messy and not user-friendly, so a proper integration would make Activepieces much more appealing for self-hosted AI workflows and bring it closer to competitors like n8n. It’s definitely worth supporting.

aadarsh-nagrath avatar Oct 29 '25 10:10 aadarsh-nagrath

Hi, for Activepieces installed in local Docker I used LocalAI (Ask LocalAI), and for the connection to Ollama: http://host.docker.internal:11434/v1, with any API key ("ollama", for example).

(connection screenshot)

I'm new to this, but I think it should work fine with any OpenAI-style API. You can try it.
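As a rough illustration of what such a connection sends under the hood (the model name is an example, and Ollama ignores the bearer token, which is why any API key works):

```ts
// Sketch of the OpenAI-style chat completion request behind a connection
// like the one described above (Node 18+ for the global fetch).
const res = await fetch("http://host.docker.internal:11434/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer ollama", // any non-empty value is accepted
  },
  body: JSON.stringify({
    model: "llama3", // example model name
    messages: [{ role: "user", content: "Say hello" }],
  }),
});

const data = (await res.json()) as {
  choices: { message: { content: string } }[];
};
console.log(data.choices[0].message.content);
```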

pironev avatar Nov 13 '25 14:11 pironev