--local should respect $OLLAMA_HOST rather than defaulting to localhost
Is your feature request related to a problem? Please describe.
No response
Describe the solution you'd like
$OLLAMA_HOST is the standard way to define your Ollama server URL. It is used by various Ollama clients, including the official Ollama command-line tool.
At present, --local uses a hardcoded api_base of http://localhost:11434.
It would be nice if it defaulted to the value of the $OLLAMA_HOST environment variable when that variable is set.
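A minimal sketch of the requested fallback order (explicit flag, then $OLLAMA_HOST, then the current localhost default). This is illustrative only; `resolve_ollama_api_base` and `DEFAULT_OLLAMA_API_BASE` are hypothetical names, not existing functions in the project:

```python
import os

# Default the --local flag uses today, per the issue text.
DEFAULT_OLLAMA_API_BASE = "http://localhost:11434"

def resolve_ollama_api_base(explicit_api_base: str | None = None) -> str:
    """Resolve the Ollama api_base: explicit flag > $OLLAMA_HOST > localhost default.

    Hypothetical helper shown for illustration only.
    """
    if explicit_api_base:
        return explicit_api_base

    host = os.environ.get("OLLAMA_HOST")
    if host:
        # $OLLAMA_HOST is sometimes set as "host:port" without a scheme,
        # so normalize it to a full URL before handing it to the client.
        if not host.startswith(("http://", "https://")):
            host = "http://" + host
        return host.rstrip("/")

    return DEFAULT_OLLAMA_API_BASE
```

With this kind of lookup, `--local` would keep working unchanged on a single machine while transparently picking up a remote server when $OLLAMA_HOST is exported.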
Describe alternatives you've considered
No response
Additional context
No response
100% agree. It's very hard to use when it's restricted to localhost. I suspect a lot of people run Ollama on a machine with GPUs that isn't their actual localhost, so please make this work. I even tried the --api_base parameter but ran into errors (llm.py and lightllm).
+10 to this one - in fact I run Ollama on 4 PCs, with the heavy models on the ones with better GPUs. It would be great if the client could understand this and route LLM traffic efficiently.