Add auto-detection of a local Ollama server, with a variable for the baseURL
Auto-detect models with just an env var:
- export OLLAMA_BASE_URL="http://localhost:11434"
- run opencode
- /models
- Your local models will be detected.
- Manual config can still be set as usual if you want
The implementation now follows all coding guidelines:
- No let variables - Using const and immutable patterns throughout
- No else statements - Refactored to use early returns and separate if statements
- Error handling with .catch() - Replaced all try/catch blocks with promise chains
- Precise types - Defined a TagsResponse type to avoid any (see the sketch after this list)
- Concise naming - Used envUrl, base, url instead of verbose names
- Single function logic - IIFE used only for scoping the detection flow
- Runtime APIs - Using native fetch, which is appropriate for this use case
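For illustration, here is a minimal sketch of the .catch()-based error handling together with a TagsResponse type. It assumes the standard shape of Ollama's GET /api/tags response and only keeps the models[].name field; the actual type in the PR may include more fields.

```ts
// Minimal shape of Ollama's GET /api/tags response (only the field used here)
type TagsResponse = {
  models: { name: string }[]
}

// Fetch the model list without try/catch: any failure resolves to an empty list
const listModels = (base: string): Promise<TagsResponse["models"]> =>
  fetch(`${base}/api/tags`, { signal: AbortSignal.timeout(5_000) })
    .then((r) => (r.ok ? (r.json() as Promise<TagsResponse>) : { models: [] }))
    .then((data) => data.models)
    .catch(() => [])
```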
Key Implementation Details
Provider Detection (provider.ts:230-294):
```ts
ollama: async (provider) => {
  // 1. Detect server URL (env var or fallbacks)
  const envUrl = process.env["OLLAMA_BASE_URL"]
  const url = await (async () => {
    if (envUrl) return envUrl
    for (const base of ["http://localhost:11434", "http://127.0.0.1:11434"]) {
      const ok = await fetch(`${base}/api/tags`, { signal: AbortSignal.timeout(1000) })
        .then((r) => r.ok)
        .catch(() => false)
      if (ok) return base
    }
    return null
  })()
  // 2. Fetch and auto-discover models
  // 3. Add models to provider
  // 4. Return baseURL for OpenAI-compatible endpoint
}
```
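For context, a hedged sketch of what steps 2-4 might look like inside that loader. The return shape and the way models are attached to `provider.models` are assumptions for illustration; the actual loader in provider.ts may differ.

```ts
  // 2. Fetch and auto-discover models (bail out if no server was found)
  if (!url) return { autoload: false }
  const tags = await fetch(`${url}/api/tags`, { signal: AbortSignal.timeout(5_000) })
    .then((r) => r.json() as Promise<TagsResponse>)
    .catch(() => ({ models: [] as { name: string }[] }))

  // 3. Add discovered models to the provider, keeping any manual overrides
  for (const model of tags.models) {
    if (provider.models[model.name]) continue
    provider.models[model.name] = { name: model.name }
  }

  // 4. Return the baseURL for the OpenAI-compatible endpoint
  return { autoload: true, options: { baseURL: `${url}/v1` } }
```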
Database Setup (provider.ts:411-423):
- Ensures the Ollama provider exists in the database before custom loaders run (see the sketch after this list)
- Sets default npm package if not specified
- No else statements, clean control flow
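Purely as illustration of that control flow (the database API, the `providers` record, and the default npm package name are assumptions, not the actual opencode internals):

```ts
// Ensure an Ollama provider entry exists before custom loaders run
const ensureOllama = (providers: Record<string, { npm?: string }>) => {
  // Guard clauses instead of else branches
  if (!providers["ollama"]) providers["ollama"] = {}
  if (!providers["ollama"].npm) providers["ollama"].npm = "@ai-sdk/openai-compatible"
}
```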
Features
- ✅ Zero-configuration - Works without any config file
- ✅ Automatic model discovery - Fetches all models from /api/tags
- ✅ Remote server support - Via the OLLAMA_BASE_URL environment variable
- ✅ Fallback detection - Tries localhost and 127.0.0.1 automatically
- ✅ Custom configuration - Optional override for display names and settings
Documentation (providers.mdx:560-643)
- Clear zero-config quick start guide
- Explains auto-detection priority (env var → fallbacks)
- Documents model discovery behavior
- Provides optional manual configuration examples (a rough illustration follows this list)
- Uses realistic model names (llama3.2:latest, qwen2.5-coder:7b)
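As a rough illustration of the optional manual override described above (the exact keys are assumptions; the examples in providers.mdx are authoritative):

```json
{
  "provider": {
    "ollama": {
      "options": { "baseURL": "http://192.168.2.26:11434/v1" },
      "models": {
        "llama3.2:latest": { "name": "Llama 3.2 (local)" },
        "qwen2.5-coder:7b": { "name": "Qwen 2.5 Coder 7B" }
      }
    }
  }
}
```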
Testing Verified ✅
Tested with your remote Ollama server at http://192.168.2.26:11434:
- ✅ Environment variable detection works
- ✅ All models were detected, whether on localhost or on another server
- ✅ Models appear in OpenCode model list
- ✅ Connection successful to /v1/chat/completions
@rekram1-node I know you have a lot to look at and review; just wanted to give this a bump for whenever someone has a chance. I introduced a variable, OLLAMA_BASE_URL, so the provider automatically detects Ollama models.
I'll try to give it a more thorough review today
The title is misleading, I realize. It is really just a variable that lets you automatically see all your Ollama models from a local server or one of your choice.
@rekram1-node updated and fixed merge conflicts. Take a look and see if this is something you want or need in opencode.