
[Bug]: litellm doesn't "see" the ollama serve running in WSL on Windows

HyperUpscale opened this issue 1 year ago

What happened?

The issue is: I have Ollama running inside WSL, so Docker containers, browsers, and every other application can see it running; only litellm doesn't seem to find it.
I tried different installations (litellm and litellm[proxy]), tried with a config file (maybe a wrong one), and also tried installing litellm in another Docker container, another WSL distro, and another Python virtual environment, but regardless, litellm can't find the running Ollama service.

[screenshot of the proxy startup warnings]

Always the same persistent problem:

Field "model_list" has conflict with protected namespace "model_". You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
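
For what it's worth, that first message is a generic pydantic v2 warning about a field name starting with model_ and is unrelated to the connection failure. A minimal standalone sketch of the pattern it points at (plain pydantic, not litellm's actual internals):

from pydantic import BaseModel, ConfigDict

# A field named "model_list" collides with pydantic's reserved "model_" namespace;
# clearing protected_namespaces on the model config silences the warning.
class ProxyConfig(BaseModel):
    model_config = ConfigDict(protected_namespaces=())

    model_list: list = []

print(ProxyConfig(model_list=[{"model_name": "ollama/mistral"}]))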

LiteLLM Warning: proxy started with `ollama` model

ollama serve failed with Exception [WinError 2] The system cannot find the file specified. Ensure you run ollama serve

What do I need to do on the Windows side?
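
A quick way to narrow this down is to check whether the environment litellm actually runs in can reach Ollama at all. A standard-library sketch, assuming the default Ollama port 11434; run it from wherever litellm runs (the Windows side or the litellm container), not from the WSL shell where ollama serve lives:

import urllib.request

# /api/tags is Ollama's "list local models" endpoint; a 200 here means this
# environment can reach the server that litellm is failing to find.
url = "http://localhost:11434/api/tags"
try:
    with urllib.request.urlopen(url, timeout=3) as resp:
        print(url, "->", resp.status)
except OSError as exc:
    print(url, "-> unreachable:", exc)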

Relevant log output

No response


HyperUpscale avatar Jan 19 '24 13:01 HyperUpscale

I am getting the same error. Were you able to find a solution?

bvac78 avatar Jan 22 '24 19:01 bvac78

I haven't found a solution yet... ^0_0^

HyperUpscale avatar Jan 24 '24 07:01 HyperUpscale

I really don't get it... it is not about the host (even though that seemed to be the problem, since we have many IPs and hostnames); it is something about the defaults.

litellm --model ollama/mistral doesn't work... for an unknown reason.

Whereas when I tested with a simple Python call, it works:

from litellm import completion

# no api_base given, so this relies on litellm's default Ollama endpoint
response = completion(
    model="ollama/mistral",
    messages=[{"role": "user", "content": "Tell me in two words who you are."}],
)
print(response)

[screenshot of the successful response]

Which tells me the problem seems to be in the defaults of the litellm "--model" option. IDK.
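
If it really is the defaults, one thing worth trying is making the endpoint explicit instead of relying on them. completion() accepts an api_base override for Ollama models; whether the proxy picks a different default is exactly the part I'm unsure about:

from litellm import completion

# Same call as above, but with the Ollama endpoint spelled out explicitly
# instead of relying on litellm's default for ollama/ models.
response = completion(
    model="ollama/mistral",
    messages=[{"role": "user", "content": "Tell me in two words who you are."}],
    api_base="http://localhost:11434",
)
print(response)

If that works where the bare call doesn't, the same override can apparently be passed when starting the proxy (litellm --model ollama/mistral --api_base http://localhost:11434), assuming the flag behaves like the SDK parameter.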

HyperUpscale avatar Jan 26 '24 09:01 HyperUpscale

TRIED: using the hostname: host.docker.internal

This fix worked for a similar problem in another project (Dialoqbase), but it doesn't work for litellm 😂
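
If litellm is running inside a container, a quick way to see whether host.docker.internal is even usable from there is a bare socket check (a sketch, assuming the default Ollama port 11434):

import socket

# Run this inside the litellm container: does host.docker.internal resolve,
# and does anything answer on the Ollama port there?
try:
    addr = socket.gethostbyname("host.docker.internal")
    print("host.docker.internal resolves to", addr)
    with socket.create_connection((addr, 11434), timeout=3):
        print("port 11434 is reachable")
except OSError as exc:
    print("not reachable:", exc)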

HyperUpscale avatar Jan 26 '24 12:01 HyperUpscale

I too am having the same problem.

Colinw2292 avatar Jan 31 '24 23:01 Colinw2292

hey, i'm the maintainer - i'm not sure what the solution here is, since i don't have a similar environment to repro. If someone discovers the issue, we'd welcome a pr here!

krrishdholakia avatar Feb 01 '24 00:02 krrishdholakia

TRIED: using the hostname: host.docker.internal. This fix worked for a similar problem in another project (Dialoqbase), but it doesn't work for litellm 😂

I expose the host network to the containers just to make life easier: docker run --net=host ... (if you're running under Docker). I also set the default WSL2 networkingMode to "mirrored": https://learn.microsoft.com/en-us/windows/wsl/wsl-config#configuration-settings-for-wslconfig

sudo docker run -d --gpus=all --ipc=host --network=host -v /home/matbee/ollama:/root/.ollama -p 11434:11434 ollama/ollama
(base) matbee@hostname:~/dev/litellm$ cat config.yaml 

model_list:
  - model_name: ollama-codellama
    litellm_params:
      model: ollama/codellama:70b
      api_base: http://0.0.0.0:11434
      rpm: 1440
    model_info: 
      version: 2

litellm_settings:
  drop_params: True
  set_verbose: True


(base) matbee@hostname:~/dev/litellm$ cat docker-compose.yaml 
version: "3.9"
services:
  litellm:
    network_mode: host
    build:
      context: .
      args:
        target: runtime
    image: ghcr.io/berriai/litellm:main-latest
    ports:
      - "8000:8000"
    volumes:
      - ./config.yaml:/app/config.yaml
    command: [ "--config", "/app/config.yaml", "--port", "8000", "--num_workers", "6" ]
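
For completeness, once that stack is up, the proxy should answer OpenAI-style chat requests on port 8000 using the model_name from config.yaml. A minimal client sketch (standard library only, assuming no master_key is configured so no auth header is needed):

import json, urllib.request

# Call the proxy started by the compose file above; model name and port
# come from the config.yaml / docker-compose.yaml shown earlier.
payload = {
    "model": "ollama-codellama",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=300) as resp:
    body = json.loads(resp.read())
    print(body["choices"][0]["message"]["content"])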

matbeedotcom avatar Feb 01 '24 18:02 matbeedotcom