
[Bug]: Proxy doesn't work in the latest version

Open davisuga opened this issue 7 months ago • 4 comments

What happened?

I tried other solutions from previously closed issues, but none of them worked.

Relevant log output


❯ uv init --no-workspace
Initialized project `research-crew`
~/gits/research_crew main ?9                                 Py research-crew 13:21:20
❯ uv venv               
Using CPython 3.12.7
Creating virtual environment at: .venv
Activate with: source .venv/bin/activate
~/gits/research_crew main ?9                                 Py research-crew 13:21:23
❯ source .venv/bin/activate
~/gits/research_crew main ?9                                 Py research_crew 13:21:27
❯ uv add "litellm[proxy]"        


Resolved 87 packages in 207ms
      Built litellm-proxy-extras==0.1.17
Prepared 21 packages in 8.80s
Installed 84 packages in 144ms
 + aiohappyeyeballs==2.6.1
 + aiohttp==3.11.18
 + aiosignal==1.3.2
 + annotated-types==0.7.0
 + anyio==4.9.0
 + apscheduler==3.11.0
 + attrs==25.3.0
 + backoff==2.2.1
 + boto3==1.34.34
 + botocore==1.34.162
 + certifi==2025.4.26
 + cffi==1.17.1
 + charset-normalizer==3.4.2
 + click==8.1.8
 + cryptography==43.0.3
 + distro==1.9.0
 + dnspython==2.7.0
 + email-validator==2.2.0
 + fastapi==0.115.12
 + fastapi-sso==0.16.0
 + filelock==3.18.0
 + frozenlist==1.6.0
 + fsspec==2025.3.2
 + gunicorn==23.0.0
 + h11==0.16.0
 + hf-xet==1.1.0
 + httpcore==1.0.9
 + httpx==0.28.1
 + httpx-sse==0.4.0
 + huggingface-hub==0.31.1
 + idna==3.10
 + importlib-metadata==8.7.0
 + jinja2==3.1.6
 + jiter==0.9.0
 + jmespath==1.0.1
 + jsonschema==4.23.0
 + jsonschema-specifications==2025.4.1
 + litellm==1.68.2
 + litellm-proxy-extras==0.1.17
 + markdown-it-py==3.0.0
 + markupsafe==3.0.2
 + mcp==1.5.0
 + mdurl==0.1.2
 + multidict==6.4.3
 + oauthlib==3.2.2
 + openai==1.75.0
 + orjson==3.10.18
 + packaging==25.0
 + propcache==0.3.1
 + pycparser==2.22
 + pydantic==2.11.4
 + pydantic-core==2.33.2
 + pydantic-settings==2.9.1
 + pygments==2.19.1
 + pyjwt==2.10.1
 + pynacl==1.5.0
 + python-dateutil==2.9.0.post0
 + python-dotenv==1.0.1
 + python-multipart==0.0.18
 + pyyaml==6.0.2
 + redis==5.2.1
 + referencing==0.36.2
 + regex==2024.11.6
 + requests==2.32.3
 + rich==13.7.1
 + rpds-py==0.24.0
 + rq==2.3.3
 + s3transfer==0.10.4
 + six==1.17.0
 + sniffio==1.3.1
 + sse-starlette==2.3.4
 + starlette==0.46.2
 + tiktoken==0.9.0
 + tokenizers==0.21.1
 + tqdm==4.67.1
 + typing-extensions==4.13.2
 + typing-inspection==0.4.0
 + tzlocal==5.3.1
 + urllib3==2.4.0
 + uvicorn==0.29.0
 + uvloop==0.21.0
 + websockets==13.1
 + yarl==1.20.0
 + zipp==3.21.0
~/gits/research_crew main ?9                              9s Py research_crew 13:21:43
❯ whereis litellm       
litellm: /Users/davi/gits/research_crew/.venv/bin/litellm
~/gits/research_crew main ?9                                 Py research_crew 13:21:51
❯ litellm --config litellm-config.yaml             
Traceback (most recent call last):
  File "/Users/davi/gits/research_crew/.venv/lib/python3.12/site-packages/litellm/proxy/proxy_cli.py", line 507, in run_server
    from .proxy_server import (
  File "/Users/davi/gits/research_crew/.venv/lib/python3.12/site-packages/litellm/proxy/proxy_server.py", line 224, in <module>
    from litellm.proxy.management_endpoints.internal_user_endpoints import (
  File "/Users/davi/gits/research_crew/.venv/lib/python3.12/site-packages/litellm/proxy/management_endpoints/internal_user_endpoints.py", line 27, in <module>
    from litellm.proxy.hooks.user_management_event_hooks import UserManagementEventHooks
  File "/Users/davi/gits/research_crew/.venv/lib/python3.12/site-packages/litellm/proxy/hooks/user_management_event_hooks.py", line 13, in <module>
    from enterprise.enterprise_callbacks.send_emails.base_email import BaseEmailLogger
ModuleNotFoundError: No module named 'enterprise'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/davi/gits/research_crew/.venv/bin/litellm", line 10, in <module>
    sys.exit(run_server())
             ^^^^^^^^^^^^
  File "/Users/davi/gits/research_crew/.venv/lib/python3.12/site-packages/click/core.py", line 1161, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/davi/gits/research_crew/.venv/lib/python3.12/site-packages/click/core.py", line 1082, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/Users/davi/gits/research_crew/.venv/lib/python3.12/site-packages/click/core.py", line 1443, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/davi/gits/research_crew/.venv/lib/python3.12/site-packages/click/core.py", line 788, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/davi/gits/research_crew/.venv/lib/python3.12/site-packages/litellm/proxy/proxy_cli.py", line 519, in run_server
    from proxy_server import (
ModuleNotFoundError: No module named 'proxy_server'

Are you an ML Ops Team?

Yes

What LiteLLM version are you on?

1.68.2

Twitter / LinkedIn details

No response

davisuga avatar May 10 '25 16:05 davisuga

This was introduced in 1.68.2 and has nothing to do with UV. I keep using 1.68.1 as a workaround.
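For anyone pinning the same way in a uv-managed project like the one above, something along these lines should do it (untested sketch):

❯ uv add "litellm[proxy]==1.68.1"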

wsomm avatar May 10 '25 19:05 wsomm

hi @davisuga please try https://docs.litellm.ai/release_notes/v1.69.0-stable

let me know if it persists
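For a uv-managed project, a sketch of pulling in the fixed release (the version bound is illustrative):

❯ uv add "litellm[proxy]>=1.69.0"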

ishaan-jaff avatar May 12 '25 16:05 ishaan-jaff

Is this still an issue?

krrishdholakia avatar May 13 '25 04:05 krrishdholakia

For me it is working perfectly now. Can be closed IMO, but I'm not the OP…

wsomm avatar May 13 '25 07:05 wsomm

@krrishdholakia, per the LiteLLM docs for MCP: https://docs.litellm.ai/docs/mcp#1-define-your-tools-on-under-mcp_servers-in-your-configyaml-file

Following the config.yaml example there, we defined both model_list and mcp_servers as instructed, then followed the steps in that guide to run the proxy.

However, the list_tools response is empty, even though the two MCP servers are correctly configured and running, each with 2 tools defined. The tools are accessible via mcpinspector or a LangGraph agent, but not via litellm_proxy.

INFO: 127.0.0.1:62338 - "POST /mcp/sse/messages?session_id=fae556d85d784be79dc6873038788e50 HTTP/1.1" 200 OK
16:20:07 - LiteLLM:DEBUG: server.py:96 - GLOBAL MCP TOOLS: []
16:20:07 - LiteLLM:DEBUG: mcp_server_manager.py:74 - SSE SERVER MANAGER LISTING TOOLS
16:20:07 - LiteLLM:DEBUG: server.py:100 - SSE TOOLS: []

config.yaml

model_list:
  - model_name: gpt-4o
    litellm_params:
      model: azure/gpt-4o
      api_base: <REDACTED>
      api_key: <REDACTED>
      base_model: gpt-4o
      
mcp_servers:
  math:
    url: "http://localhost:8000/sse"
  weather1:
    url: "http://localhost:8001/sse"

litellm 1.69.1
litellm-enterprise 0.1.2
litellm-proxy-extras 0.1.20
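One way to isolate whether the proxy or the servers are at fault is to hit the SSE endpoints from the config directly (a quick check using the URLs from the config.yaml above; curl's -N flag disables buffering so streamed events print as they arrive):

❯ curl -N http://localhost:8000/sse
❯ curl -N http://localhost:8001/sse

If both connections stay open and stream an initial event, the servers are reachable and the problem sits on the proxy side.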

kjoth avatar May 14 '25 11:05 kjoth

For me it is working perfectly now. Can be closed IMO, but I'm not the OP…

Which version of LiteLLM did you use? I have tried 1.69.2, 1.69.1, and 1.67.0.

kjoth avatar May 14 '25 11:05 kjoth

@krrishdholakia When we use the Docker image you referenced (https://docs.litellm.ai/release_notes/v1.69.0-stable), it works. But via pip we are unable to find the PyPI version litellm==1.69.0.post1.

So when will this fix be available in the PyPI package?

Another observation: In the LiteLLM REST API for MCP, the /mcp/tools/call endpoint does not appear to include a placeholder for passing arguments when invoking a specific tool.
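For comparison, the MCP protocol's tools/call request carries a tool name plus an arguments object, so one would expect the REST endpoint to accept a body shaped roughly like the sketch below. The exact schema of LiteLLM's /mcp/tools/call is an assumption here, and the tool name and arguments are made up for illustration (a hypothetical "add" tool on the math server; port 4000 is the proxy's default):

❯ curl -X POST http://localhost:4000/mcp/tools/call \
    -H "Content-Type: application/json" \
    -d '{"name": "add", "arguments": {"a": 1, "b": 2}}'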

kjoth avatar May 14 '25 11:05 kjoth

Which version of LiteLLM did you use? I have tried 1.69.2, 1.69.1, and 1.67.0.

I made a mistake. The original installation was "uv pip install litellm[proxy]", but my update script only ran "uv pip install --upgrade litellm" (without [proxy]). Changing the update script to "uv pip install --upgrade litellm[proxy]" immediately solved the issue.

My fault. I am using 1.69.0 for now.
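One note for anyone copying these commands: square brackets are glob characters in zsh (which the prompt in the logs above appears to be), so the extra should be quoted or escaped, otherwise the command can fail with "no matches found":

❯ uv pip install --upgrade "litellm[proxy]"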

wsomm avatar May 14 '25 12:05 wsomm

Fixed in the latest version (1.69.1). Thanks @ishaan-jaff!

davisuga avatar May 19 '25 13:05 davisuga