Additional external endpoint
**Is your feature request related to a problem? Please describe.**
I'd like to be able to add both OpenAI and Mistral as endpoints so I can use either/both for prompts when needed.

**Describe the solution you'd like**
The option to add additional endpoints, e.g. to OpenAI and Mistral.

**Describe alternatives you've considered**
I've tried using LiteLLM as the endpoint, but it brings with it all the Ollama models, so every model ends up being listed twice. Filtering out Ollama models from LiteLLM could also be a solution, I suppose.
Thanks for all the work on this project; the speed at which it's developing is amazing!
@davecrab I am working on a PR with a thorough guide in `/docs` for setting up LiteLLM alongside Ollama WebUI, bringing together the various snippets I've dropped into issue reports like this.

In the meantime, here's a sample `config.yaml` I use:
```yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: openai/gpt-3.5-turbo-1106
      api_base: https://api.openai.com/v1
      api_key: "os.environ/OPENAI_API_KEY"
  - model_name: gpt-4-turbo
    litellm_params:
      model: openai/gpt-4-1106-preview
      api_base: https://api.openai.com/v1
      api_key: "os.environ/OPENAI_API_KEY"
  - model_name: mistral-tiny
    litellm_params:
      model: mistral/mistral-tiny
      api_base: https://api.mistral.ai/v1
      api_key: "os.environ/MISTRAL_API_KEY"
  - model_name: mistral-small
    litellm_params:
      model: mistral/mistral-small
      api_base: https://api.mistral.ai/v1
      api_key: "os.environ/MISTRAL_API_KEY"
  - model_name: mistral-medium
    litellm_params:
      model: mistral/mistral-medium
      api_base: https://api.mistral.ai/v1
      api_key: "os.environ/MISTRAL_API_KEY"
  - model_name: claude-2.1
    litellm_params:
      model: claude-2.1
      api_key: "os.environ/ANTHROPIC_API_KEY"
  - model_name: claude-instant-1.2
    litellm_params:
      model: claude-instant-1.2
      api_key: "os.environ/ANTHROPIC_API_KEY"

general_settings:
  master_key: "os.environ/MASTER_KEY"
```
You can store this configuration in a file named `config.yaml`. The full documentation for setting up the LiteLLM proxy is here: https://docs.litellm.ai/docs/proxy/configs
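For a quick sanity check before wiring it into the WebUI, you can point the OpenAI Python client straight at the proxy. A minimal sketch, assuming the proxy is reachable on http://localhost:8000 (e.g. started locally with `litellm --config config.yaml --port 8000`, or with port 8000 published from the container):

```python
# Minimal sketch: call the LiteLLM proxy through its OpenAI-compatible API.
# Assumes the proxy is reachable at http://localhost:8000 and that the api_key
# matches the master_key you configured (placeholder value below).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="sk-your-master-key",  # placeholder; use your MASTER_KEY value
)

response = client.chat.completions.create(
    model="mistral-small",  # any model_name from config.yaml
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.choices[0].message.content)
```

The same base URL and key are what Ollama WebUI will use via `OPENAI_API_BASE_URL` and `OPENAI_API_KEY` in the Compose file below.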
Next, you'll need to run Ollama WebUI and LiteLLM using Docker Compose. Here's an example of what the `docker-compose.litellm.yaml` file should look like:
```yaml
version: '3.9'

services:
  ollama:
    volumes:
      - ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    image: ollama/ollama:latest

  ollama-webui:
    build:
      context: .
      args:
        OLLAMA_API_BASE_URL: '/ollama/api'
      dockerfile: Dockerfile
    image: ghcr.io/ollama-webui/ollama-webui:main
    container_name: ollama-webui
    volumes:
      - ollama-webui:/app/backend/data
    depends_on:
      - ollama
    ports:
      - ${OLLAMA_WEBUI_PORT-3000}:8080
    environment:
      - "OLLAMA_API_BASE_URL=http://ollama:11434/api"
      - "OPENAI_API_BASE_URL=http://litellm:8000/v1"
      - "OPENAI_API_KEY=${LITELLM_API_KEY}" # Set this to whatever you like, except blank
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped

  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    container_name: litellm
    environment:
      - "MASTER_KEY=${LITELLM_API_KEY}" # Set this to whatever you like, except blank
      - "OPENAI_API_KEY=${OPENAI_API_KEY}"
      - "MISTRAL_API_KEY=${MISTRAL_API_KEY}"
      - "ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}"
    volumes:
      - ./litellm/config.yaml:/app/config.yaml
    command: [ "--config", "/app/config.yaml", "--port", "8000" ]
    restart: unless-stopped

volumes:
  ollama: {}
  ollama-webui: {}
```
Make sure to replace the placeholders for the API keys with your actual keys. Also, ensure that the `config.yaml` file is located in a directory named `litellm`, which should be inside the `ollama-webui` directory. Finally, run the Docker Compose file using the command `docker-compose -f docker-compose.litellm.yaml up`.
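The `${...}` variables in the Compose file can be supplied via a `.env` file placed next to `docker-compose.litellm.yaml`, which Docker Compose picks up automatically. For illustration, every value below is a placeholder:

```
# .env - placeholder values only; substitute your real keys
LITELLM_API_KEY=sk-litellm-anything-but-blank
OPENAI_API_KEY=sk-your-openai-key
MISTRAL_API_KEY=your-mistral-key
ANTHROPIC_API_KEY=your-anthropic-key
OLLAMA_WEBUI_PORT=3000
```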
By following these steps, you should end up with a cleaner and more organized model list in Ollama WebUI.
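If you want to double-check what the proxy is actually exposing, listing its models should return only the `model_name` entries from `config.yaml`, not the Ollama models (again a sketch, assuming port 8000 is reachable from where you run it):

```python
# Sketch: list the models served by the LiteLLM proxy.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-your-master-key")
for model in client.models.list():
    print(model.id)  # e.g. gpt-3.5-turbo, gpt-4-turbo, mistral-tiny, ...
```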
Let me know if you have any questions or need further clarification!
Oh this is great, thanks! Will give it a go tomorrow
I'll merge this issue with https://github.com/ollama-webui/ollama-webui/issues/432. Let's continue our discussion there!