
Adding auto-detection of local Ollama with a variable for baseURL

Open dkudos opened this issue 1 month ago • 4 comments

Auto-detect models with just an env var:

  1. export OLLAMA_BASE_URL="localhost:11434"
  2. run opencode
  3. /models
  4. Your local models will be detected.
  5. Manual config can still be set as usual, if you want

The implementation now follows all coding guidelines:

  1. No let variables - Using const and immutable patterns throughout
  2. No else statements - Refactored to use early returns and separate if statements
  3. Error handling with .catch() - Replaced all try/catch blocks with promise chains (see the sketch after this list)
  4. Precise types - Defined TagsResponse type to avoid any
  5. Concise naming - Used envUrl, base, url instead of verbose names
  6. Single function logic - IIFE used only for scoping the detection flow
  7. Runtime APIs - Using native fetch which is appropriate for this use case
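
For illustration, here is a minimal sketch of the .catch() and precise-typing style described above; the field layout of TagsResponse is an assumption based on Ollama's usual /api/tags response, not the PR's literal code:

```ts
// Assumed shape of Ollama's /api/tags response (only the fields used here).
type TagsResponse = {
  models: { name: string }[]
}

// List model names without try/catch: failures collapse to an empty list via .catch(),
// and the non-ok case uses an early return instead of an else branch.
const listModels = (base: string): Promise<string[]> =>
  fetch(`${base}/api/tags`, { signal: AbortSignal.timeout(1000) })
    .then((r) => {
      if (!r.ok) return [] as string[]
      return r.json().then((data: TagsResponse) => data.models.map((m) => m.name))
    })
    .catch(() => [])
```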

Key Implementation Details

Provider Detection (provider.ts:230-294):

```ts
ollama: async (provider) => {
  // 1. Detect server URL (env var or fallbacks)
  const envUrl = process.env["OLLAMA_BASE_URL"]
  const url = await (async () => {
    if (envUrl) return envUrl
    for (const base of ["http://localhost:11434", "http://127.0.0.1:11434"]) {
      const ok = await fetch(`${base}/api/tags`, { signal: AbortSignal.timeout(1000) })
        .then((r) => r.ok)
        .catch(() => false)
      if (ok) return base
    }
    return null
  })()

  // 2. Fetch and auto-discover models
  // 3. Add models to provider
  // 4. Return baseURL for OpenAI-compatible endpoint
}
```
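
Steps 2-4 are only summarized in the comments above; a self-contained sketch of how they might continue follows. The OllamaProvider shape and the discoverModels name are hypothetical stand-ins, while the /api/tags endpoint and the /v1 OpenAI-compatible base URL come from the description above:

```ts
// Assumed shape of Ollama's /api/tags response (only the fields used here).
type TagsResponse = { models: { name: string }[] }

// Hypothetical provider shape, standing in for opencode's real provider type.
type OllamaProvider = {
  models: Record<string, { name: string }>
  options: { baseURL?: string }
}

// Sketch of steps 2-4: discover models and point the provider at the
// OpenAI-compatible endpoint Ollama serves under /v1.
const discoverModels = async (url: string, provider: OllamaProvider) => {
  // 2. Fetch and auto-discover models
  const tags = await fetch(`${url}/api/tags`)
    .then((r) => r.json() as Promise<TagsResponse>)
    .catch(() => ({ models: [] }))

  // 3. Add models to provider, keeping any manually configured entries
  for (const m of tags.models) {
    provider.models[m.name] = provider.models[m.name] ?? { name: m.name }
  }

  // 4. Return baseURL for the OpenAI-compatible endpoint
  return { ...provider, options: { ...provider.options, baseURL: `${url}/v1` } }
}
```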

Database Setup (provider.ts:411-423):

  • Ensures the Ollama provider exists in the database before custom loaders run (a rough sketch follows this list)
  • Sets default npm package if not specified
  • No else statements, clean control flow
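
A rough sketch of what that setup step might look like; the Database interface, the getProvider/setProvider calls, and the package name are all hypothetical placeholders rather than opencode's actual APIs:

```ts
// Hypothetical provider store, standing in for whatever provider.ts actually uses.
type ProviderRecord = { id: string; npm?: string }
type Database = {
  getProvider(id: string): ProviderRecord | undefined
  setProvider(record: ProviderRecord): void
}

// Ensure an Ollama provider record exists and has an npm package set,
// using early returns instead of else branches.
const ensureOllamaProvider = (db: Database) => {
  const existing = db.getProvider("ollama")
  if (!existing) {
    db.setProvider({ id: "ollama", npm: "example-openai-compatible-package" }) // placeholder npm name
    return
  }
  if (existing.npm) return
  db.setProvider({ ...existing, npm: "example-openai-compatible-package" }) // placeholder npm name
}
```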

Features

  • ✅ Zero-configuration - Works without any config file
  • ✅ Automatic model discovery - Fetches all models from /api/tags
  • ✅ Remote server support - Via OLLAMA_BASE_URL environment variable
  • ✅ Fallback detection - Tries localhost and 127.0.0.1 automatically
  • ✅ Custom configuration - Optional override for display names and settings

Documentation (providers.mdx:560-643)

  • Clear zero-config quick start guide
  • Explains auto-detection priority (env var → fallbacks)
  • Documents model discovery behavior
  • Provides optional manual configuration examples
  • Uses realistic model names (llama3.2:latest, qwen2.5-coder:7b)

Testing Verified ✅

Tested with your remote Ollama server at http://192.168.2.26:11434:

  • ✅ Environment variable detection works
  • ✅ All models, whether on another server or on localhost, were detected
  • ✅ Models appear in OpenCode model list
  • ✅ Connection successful to /v1/chat/completions (a quick reproduction sketch follows this list)
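
For anyone who wants to reproduce that check against their own server, a small script along these lines exercises both endpoints; the fallback URL and model name are just examples from this thread, not anything the PR itself ships:

```ts
// Quick connectivity check against an Ollama server.
const base = process.env["OLLAMA_BASE_URL"] ?? "http://localhost:11434"

// Model listing endpoint used for auto-discovery.
const tagsOk = await fetch(`${base}/api/tags`).then((r) => r.ok).catch(() => false)
console.log("GET /api/tags:", tagsOk ? "ok" : "unreachable")

// Ollama exposes an OpenAI-compatible chat endpoint under /v1.
const chat = await fetch(`${base}/v1/chat/completions`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3.2:latest", // example model name; use one from your own /api/tags
    messages: [{ role: "user", content: "ping" }],
  }),
}).catch(() => null)
console.log("POST /v1/chat/completions:", chat?.ok ? "ok" : "failed")
```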

dkudos avatar Nov 02 '25 00:11 dkudos

@rekram1-node I know you have a lot to look at and review; just wanted to give this a bump when someone has a chance. I introduced a variable, OLLAMA_BASE_URL, to have the provider automatically detect Ollama models.

dkudos avatar Nov 10 '25 20:11 dkudos

I'll try to give it a more thorough review today

rekram1-node avatar Nov 12 '25 17:11 rekram1-node

I realize the title is misleading. It is really just a variable that lets you automatically see all your Ollama models from a local server or one of your choice.

dkudos avatar Nov 12 '25 21:11 dkudos

@rekram1-node updated and fixed merge conflicts. Take a look and see if this is something you want or need in opencode.

dkudos avatar Dec 19 '25 16:12 dkudos