
Run multiple open-source large language models concurrently, powered by Ollama.

PolyOllama

Run multiple open-source large language models (the same model or different ones, such as Llama 2, Mistral, and Gemma) in parallel, powered by Ollama.
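Under the hood, "in parallel" amounts to fanning a single prompt out to several models through Ollama's HTTP API. A minimal sketch of the idea (function and type names here are illustrative, not taken from PolyOllama; only the `/api/generate` endpoint and default port 11434 come from Ollama itself):

```typescript
// One non-streaming generate request per model.
type GenerateRequest = {
  model: string;
  prompt: string;
  stream: boolean;
};

// Build a request body for each model with the same prompt.
function buildRequests(models: string[], prompt: string): GenerateRequest[] {
  return models.map((model) => ({ model, prompt, stream: false }));
}

// Fire all requests concurrently; Ollama serves its API on
// http://localhost:11434 by default.
async function askAll(models: string[], prompt: string): Promise<string[]> {
  const responses = await Promise.all(
    buildRequests(models, prompt).map((body) =>
      fetch("http://localhost:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(body),
      }).then((r) => r.json() as Promise<{ response: string }>)
    )
  );
  return responses.map((r) => r.response);
}
```

Because each request is an independent HTTP call, `Promise.all` lets the models answer concurrently instead of one after another.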

Demo

https://github.com/ahmetkca/PolyOllama/assets/74574469/f0084d3c-6223-4f7e-9442-2aa5f79af10d

Instructions to run it locally

You need Ollama installed on your computer.

Press cmd + k (alt + k on Windows) to open the chat prompt.

Start the backend:

cd backend
bun install
bun run index.ts

Then, in a separate terminal, start the frontend:

cd frontend
bun install
bun run dev

Running in Docker containers (frontend + backend + Ollama)

On Windows

docker compose -f docker-compose.windows.yml up

On Linux/macOS

docker compose -f docker-compose.unix.yml up
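The actual service definitions live in the repo's docker-compose.windows.yml and docker-compose.unix.yml files. As a rough illustration only (service names, build paths, and port mappings below are assumptions, not copied from those files), the three-service layout could look like:

```yaml
# Illustrative sketch; see docker-compose.unix.yml in the repo
# for the real configuration.
services:
  ollama:
    image: ollama/ollama        # official Ollama image
    ports:
      - "11434:11434"           # Ollama's default API port
  backend:
    build: ./backend
    depends_on:
      - ollama                  # backend talks to the Ollama API
  frontend:
    build: ./frontend
    ports:
      - "5173:5173"             # the URL the frontend is served on
```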

The frontend is available at http://localhost:5173.

:warning: Still a work in progress.