
[Bug]: ModuleNotFoundError: No module named 'enterprise'

Open whitfin opened this issue 10 months ago • 29 comments

What happened?

Importing the litellm.proxy module (and probably other modules) crashes with the latest PyPI release. Everything worked up until the latest version of the package on PyPI.

Edit: introduced inside https://github.com/BerriAI/litellm/pull/10321.

Relevant log output

>>> import litellm.proxy
Traceback (most recent call last):
  File "<python-input-0>", line 1, in <module>
    import litellm.proxy
  File "/usr/local/lib/python3.13/site-packages/litellm/__init__.py", line 1058, in <module>
    from .responses.main import *
  File "/usr/local/lib/python3.13/site-packages/litellm/responses/main.py", line 13, in <module>
    from litellm.responses.litellm_completion_transformation.handler import (
        LiteLLMCompletionTransformationHandler,
    )
  File "/usr/local/lib/python3.13/site-packages/litellm/responses/litellm_completion_transformation/handler.py", line 8, in <module>
    from litellm.responses.litellm_completion_transformation.streaming_iterator import (
        LiteLLMCompletionStreamingIterator,
    )
  File "/usr/local/lib/python3.13/site-packages/litellm/responses/litellm_completion_transformation/streaming_iterator.py", line 5, in <module>
    from litellm.responses.litellm_completion_transformation.transformation import (
        LiteLLMCompletionResponsesConfig,
    )
  File "/usr/local/lib/python3.13/site-packages/litellm/responses/litellm_completion_transformation/transformation.py", line 9, in <module>
    from enterprise.enterprise_hooks.session_handler import (
    ...<2 lines>...
    )
ModuleNotFoundError: No module named 'enterprise'

Are you a ML Ops Team?

No

What LiteLLM version are you on?

v1.67.4

Twitter / LinkedIn details

No response

whitfin avatar Apr 27 '25 02:04 whitfin

Similar problem for me.

DavdGao avatar Apr 27 '25 03:04 DavdGao

I've pushed a fix for this; it will be live in 1.67.4.post1.

Please pin to 1.67.3 or lower in the meantime.

ishaan-jaff avatar Apr 27 '25 03:04 ishaan-jaff
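If you want to confirm which version of litellm actually ended up in your environment after pinning (useful when a lock file or cached wheel keeps an old install around), a small stdlib-only check like the following can help. The helper name installed_version is hypothetical, not part of litellm:

```python
from importlib import metadata


def installed_version(dist):
    """Return the installed version string of a distribution, or None if absent."""
    try:
        return metadata.version(dist)
    except metadata.PackageNotFoundError:
        return None


# Compare against the pin you expect, e.g.:
# installed_version("litellm")  -> "1.67.2" if the pin took effect
```

This reads the metadata of what is actually installed in the active environment, so it catches the "updated pyproject.toml but not the lock file" case mentioned later in this thread.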

Note: 1.67.3 does not exist, according to pip index versions litellm:

pip3 index versions litellm
WARNING: pip index is currently an experimental command. It may be removed/changed in a future release without prior warning.
litellm (1.67.4)
Available versions: 1.67.4, 1.67.2, 1.67.1, 1.67.0.post1, 1.67.0, 1.66.3, 1.66.2, 1.66.1, 1.66.0, 1.65.8, 1.65.7, 1.65.6, 1.65.5, 1.65.4.post1 ...

So I pinned to 1.67.2 instead

MrOrz avatar Apr 27 '25 08:04 MrOrz

I also tried 1.67.3, but it's not working.

Naveed101633 avatar Apr 27 '25 09:04 Naveed101633

Same problem here. Solution was to pin to 1.67.2.

mmabrouk avatar Apr 27 '25 10:04 mmabrouk

Is there a solution to this? Downgrading is not helping.

SHREYASP21 avatar Apr 27 '25 11:04 SHREYASP21

Is there a solution to this, downgrading is not helping

Pinning to 1.67.2 fixes the issue. Make sure you are doing it correctly (for instance, maybe you are updating pyproject.toml but not the lock file).

mmabrouk avatar Apr 27 '25 11:04 mmabrouk

Hey, I’ve followed your advice and pinned litellm to version 1.67.2, but the issue still persists. I’m running this in Google Colab, where I simply install the package using pip install litellm==1.67.2 and try to call it immediately.

The ModuleNotFoundError for the enterprise module keeps occurring, and I don’t have any specific enterprise-related functionality in my script. Is there something else I’m missing? Any advice would be appreciated!

SHREYASP21 avatar Apr 27 '25 11:04 SHREYASP21

Damn, just when I wanted to use LiteLLM it breaks 😂😂😂😂

litellm==1.67.4

has the same issue!

shamalgithub avatar Apr 27 '25 12:04 shamalgithub

Is there a solution to this, downgrading is not helping

Pinning to 1.67.2 fixes the issue. Make sure you are correctly doing it (maybe you are updating for instance pyproject.toml but not the lock file).

1.67.2 is not working, I checked it.

Naveed101633 avatar Apr 27 '25 12:04 Naveed101633

Can someone explain what the issue is? I am new to using this tool. I basically pip installed litellm and called it; it worked yesterday and isn't working now. What is the issue, and can I not call it anymore?

SHREYASP21 avatar Apr 27 '25 12:04 SHREYASP21

Pinning to 1.67.2 works here. I used uv to install the latest litellm, then told it to downgrade to 1.67.2:

$ uv tool install litellm[proxy]@1.67.2
Resolved 81 packages in 16.77s
      Built litellm-proxy-extras==0.1.11
Prepared 2 packages in 1.44s
Uninstalled 2 packages in 30ms
Installed 2 packages in 33ms
 - litellm==1.67.4
 + litellm==1.67.2
 - litellm-proxy-extras==0.1.12
 + litellm-proxy-extras==0.1.11
Installed 1 executable: litellm
$ litellm --config config.yaml
INFO:     Started server process [3157021]
INFO:     Waiting for application startup.

MoshiBin avatar Apr 27 '25 12:04 MoshiBin

I can confirm that litellm==1.67.2 is working with no enterprise-related errors.

If you are still facing any enterprise errors, consider clearing out your virtual environment (venv, conda, pixi, uv, etc.), and reinstalling.

For those who are new, hang tight, litellm is awesome, these errors sometimes happen, and I'm sure the devs/maintainers (@krrishdholakia and @ishaan-jaff) know to add new integration tests in response.

ericmjl avatar Apr 27 '25 13:04 ericmjl

I can confirm too that downgrading to litellm==1.67.2 worked perfectly. Love your work guys!

abooo96 avatar Apr 27 '25 16:04 abooo96

LiteLLM guys, PLEASE DO NOT BREAK STUFF in MINOR releases. I mean, if you want to move some essential modules to the enterprise package, please first mark them somehow and do the official move in 1.68. I am among the numerous people who had everything break just because of a minor litellm update.

antonkulaga avatar Apr 27 '25 20:04 antonkulaga

I had a similar issue with a series of Colab notebooks I prepared for a workshop. I'm not sure how to pin to previous versions, but can you give an estimate of when a proper release that fixes this bug will be out? Pinning versions doesn't seem to be an option, and most of our code uses LiteLLM.

Livia-Zaharia avatar Apr 27 '25 21:04 Livia-Zaharia

I had the same problem with version 1.67.4. I can also confirm that pinning to a previous version solves the issue: litellm==1.67.2

lorenzocesconetto avatar Apr 27 '25 22:04 lorenzocesconetto

Downgrading to 1.67.2 also worked in a uv environment: uv add litellm==1.67.2 --frozen

takeaship avatar Apr 28 '25 02:04 takeaship

Hi everyone, fix is live here: https://pypi.org/project/litellm/1.67.4.post1/

Apologies for the issue. I will be investigating further and posting an update on how we plan to prevent these types of issues, and will link to the testing improvements I push as well.

ishaan-jaff avatar Apr 28 '25 03:04 ishaan-jaff

Lite LLM guys, PLEASE DO NOT BREAK STUFF in MINOR releases. I mean, if you want to move some essential modules to the enterprise, please first mark them somehow and do the official move in 1.68. I am among the numerous people who got everything broken just because of a minor litellm update.

hi @antonkulaga, acknowledging this feedback. I'm sorry about the issue and will do a better job around version upgrades/release conventions when we move files. Thank you for using litellm, we will do a better job on this.

ishaan-jaff avatar Apr 28 '25 03:04 ishaan-jaff

@ishaan-jaff you guys have been awesome with LiteLLM. Keep up the good work!

ericmjl avatar Apr 28 '25 04:04 ericmjl

I've used https://github.com/Arkq/flake8-requirements to validate all imports are part of project requirements, maybe you can add that to the pre-commit toolchain here

jamesbraza avatar Apr 28 '25 04:04 jamesbraza

The same thing happens again with litellm 1.68.1. So much for it not happening again.

pseudotensor avatar May 13 '25 03:05 pseudotensor

We've just moved to using a separate pip package for enterprise (as of v1.69.0). Over the next week, we plan to move imports to use that package (safer than expecting a folder to exist when it might not), and will close the issue once that work is completed.

krrishdholakia avatar May 13 '25 03:05 krrishdholakia
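The approach described above (detecting an optional pip package instead of assuming a folder exists) can be sketched roughly like this. The module name litellm_enterprise is a placeholder for illustration; the actual distribution/module name of the enterprise package may differ:

```python
import importlib.util

# Feature-detect the optional enterprise package instead of importing it
# unconditionally; find_spec returns None when the module is not installed.
# "litellm_enterprise" is a hypothetical name used for this sketch.
ENTERPRISE_AVAILABLE = importlib.util.find_spec("litellm_enterprise") is not None

if ENTERPRISE_AVAILABLE:
    # Enterprise-only imports would go here, guarded so the OSS install
    # never touches them.
    pass
```

The benefit of this pattern is that the open-source install path never executes an import that can raise ModuleNotFoundError, which is exactly the failure mode reported in this issue.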

Hi @pseudotensor, do you have a stack trace of the exception?

ishaan-jaff avatar May 13 '25 03:05 ishaan-jaff

@pseudotensor I'm unable to repro the issue:

  • litellm version used 1.69.1 (latest)
  • Here's a colab I tested with: https://colab.research.google.com/drive/1T5hzmK3UCFQ7SBw0JNYzieUgX44DKOoN?usp=sharing

ishaan-jaff avatar May 13 '25 03:05 ishaan-jaff

The latest version works. So it was just another momentary blip.

(litellm) bash-5.2$ litellm --version
Traceback (most recent call last):
  File "/workspace/miniconda3/envs/litellm/lib/python3.11/site-packages/litellm/proxy/proxy_cli.py", line 507, in run_server
    from .proxy_server import (
  File "/workspace/miniconda3/envs/litellm/lib/python3.11/site-packages/litellm/proxy/proxy_server.py", line 224, in <module>
    from litellm.proxy.management_endpoints.internal_user_endpoints import (
  File "/workspace/miniconda3/envs/litellm/lib/python3.11/site-packages/litellm/proxy/management_endpoints/internal_user_endpoints.py", line 27, in <module>
    from litellm.proxy.hooks.user_management_event_hooks import UserManagementEventHooks
  File "/workspace/miniconda3/envs/litellm/lib/python3.11/site-packages/litellm/proxy/hooks/user_management_event_hooks.py", line 13, in <module>
    from enterprise.enterprise_callbacks.send_emails.base_email import BaseEmailLogger
ModuleNotFoundError: No module named 'enterprise'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/workspace/miniconda3/envs/litellm/bin/litellm", line 8, in <module>
    sys.exit(run_server())
             ^^^^^^^^^^^^
  File "/workspace/miniconda3/envs/litellm/lib/python3.11/site-packages/click/core.py", line 1161, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/miniconda3/envs/litellm/lib/python3.11/site-packages/click/core.py", line 1082, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/workspace/miniconda3/envs/litellm/lib/python3.11/site-packages/click/core.py", line 1443, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/miniconda3/envs/litellm/lib/python3.11/site-packages/click/core.py", line 788, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/miniconda3/envs/litellm/lib/python3.11/site-packages/litellm/proxy/proxy_cli.py", line 519, in run_server
    from proxy_server import (
ModuleNotFoundError: No module named 'proxy_server'
(litellm) bash-5.2$ 

Can't even show the version due to the failure.

But:

(litellm) bash-5.2$ ls -alrt ~/miniconda3/envs/litellm/lib/python3.11/site-packages/|grep lite
drwxr-xr-x   2 root root    4096 May  9 20:03 litellm_proxy_extras-0.1.17.dist-info
drwxr-xr-x   4 root root    4096 May  9 20:03 litellm_proxy_extras
drwxr-xr-x  23 root root    4096 May  9 20:03 litellm
drwxr-xr-x   2 root root    4096 May  9 20:03 litellm-1.68.2.dist-info
(litellm) bash-5.2$ 

So litellm 1.68.2 also had the issue.

pseudotensor avatar May 13 '25 03:05 pseudotensor

Re-reading the error - this looks like it's been fixed since v1.67.4

File "/usr/local/lib/python3.13/site-packages/litellm/responses/litellm_completion_transformation/transformation.py", line 9, in <module>
    from enterprise.enterprise_hooks.session_handler import (
    ...<2 lines>...
    )

This line no longer exists in our codebase. It's now in a try-except.

Commit: https://github.com/BerriAI/litellm/commit/1e0ee2244543ff2fea0414467c71aca7813bde07#diff-ab79fe7e44da7e4b6266450180d908ea8d7dd6599bec809ef25d2bb5f31d4077
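The try/except guard described above can be sketched roughly like this. SessionHandler is a placeholder name, not the actual symbol imported in the linked commit:

```python
try:
    # Optional enterprise-only import; the module is absent in the OSS install.
    # "SessionHandler" is a placeholder name for this sketch.
    from enterprise.enterprise_hooks.session_handler import SessionHandler
except ImportError:
    # Fall back gracefully instead of crashing at import time.
    SessionHandler = None


def get_session_handler():
    """Return the enterprise session handler class if available, else None."""
    return SessionHandler
```

Callers then check for None (or a feature flag) before using the enterprise path, so the OSS install degrades gracefully instead of raising ModuleNotFoundError.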


krrishdholakia avatar May 13 '25 03:05 krrishdholakia

File "/workspace/miniconda3/envs/litellm/lib/python3.11/site-packages/litellm/proxy/hooks/user_management_event_hooks.py", line 13, in <module>
    from enterprise.enterprise_callbacks.send_emails.base_email import BaseEmailLogger
ModuleNotFoundError: No module named 'enterprise'

That's a separate import - which got moved in v1.69.0.

I believe there are probably some additional unaddressed cases of direct imports which we need to handle. I will link the PR to this issue.

krrishdholakia avatar May 13 '25 03:05 krrishdholakia

I've experimented with several of the versions mentioned in this thread, and I've noticed that attempting to use the logfire callback is the culprit. litellm works fine if I don't try to integrate it with logfire.

quinn-dougherty avatar May 22 '25 20:05 quinn-dougherty