[Bug]: `This feature is only available for LiteLLM Enterprise users` error while trying to update a virtual key
What happened?
A `This feature is only available for LiteLLM Enterprise users` error occurs, even though no enterprise features are used, when I try to edit an already created virtual key, e.g. by adding a model to the key.
In earlier versions of litellm I could add models to keys without any problems, even without an enterprise license, and according to your website this should still be possible.
I attached a video showcasing this behaviour.
https://github.com/user-attachments/assets/70930e7c-b5c3-46c3-bd41-03d637aa57bf
Relevant log output
2025-10-06 13:54:04 11:54:04 - LiteLLM Proxy:ERROR: key_management_endpoints.py:1233 - litellm.proxy.proxy_server.update_key_fn(): Exception occured - 403: {'error': 'This feature is only available for LiteLLM Enterprise users. You must be a LiteLLM Enterprise user to use this feature. If you have a license please set `LITELLM_LICENSE` in your env. Get a 7 day trial key here: https://www.litellm.ai/enterprise#trial. \nPricing: https://www.litellm.ai/#pricing'}
2025-10-06 13:54:04 Traceback (most recent call last):
2025-10-06 13:54:04 File "/usr/lib/python3.13/site-packages/litellm/proxy/management_endpoints/key_management_endpoints.py", line 1189, in update_key_fn
2025-10-06 13:54:04 non_default_values = await prepare_key_update_data(
2025-10-06 13:54:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-10-06 13:54:04 data=data, existing_key_row=existing_key_row
2025-10-06 13:54:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-10-06 13:54:04 )
2025-10-06 13:54:04 ^
2025-10-06 13:54:04 File "/usr/lib/python3.13/site-packages/litellm/proxy/management_endpoints/key_management_endpoints.py", line 967, in prepare_key_update_data
2025-10-06 13:54:04 _set_object_metadata_field(
2025-10-06 13:54:04 ~~~~~~~~~~~~~~~~~~~~~~~~~~^
2025-10-06 13:54:04 object_data=data,
2025-10-06 13:54:04 ^^^^^^^^^^^^^^^^^
2025-10-06 13:54:04 field_name=field,
2025-10-06 13:54:04 ^^^^^^^^^^^^^^^^^
2025-10-06 13:54:04 value=getattr(data, field),
2025-10-06 13:54:04 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-10-06 13:54:04 )
2025-10-06 13:54:04 ^
2025-10-06 13:54:04 File "/usr/lib/python3.13/site-packages/litellm/proxy/management_endpoints/common_utils.py", line 46, in _set_object_metadata_field
2025-10-06 13:54:04 _premium_user_check()
2025-10-06 13:54:04 ~~~~~~~~~~~~~~~~~~~^^
2025-10-06 13:54:04 File "/usr/lib/python3.13/site-packages/litellm/proxy/utils.py", line 3581, in _premium_user_check
2025-10-06 13:54:04 raise HTTPException(
2025-10-06 13:54:04 ...<4 lines>...
2025-10-06 13:54:04 )
2025-10-06 13:54:04 fastapi.exceptions.HTTPException: 403: {'error': 'This feature is only available for LiteLLM Enterprise users. You must be a LiteLLM Enterprise user to use this feature. If you have a license please set `LITELLM_LICENSE` in your env. Get a 7 day trial key here: https://www.litellm.ai/enterprise#trial. \nPricing: https://www.litellm.ai/#pricing'}
Are you a ML Ops Team?
Yes
What LiteLLM version are you on ?
v1.77.5
Twitter / LinkedIn details
No response
This issue was fixed in #15184.
Please try it with v1.77.7.
Same issue with v1.77.7 nightly.
I am just trying to remove a model from a specific key.
{'error': 'This feature is only available for LiteLLM Enterprise users: guardrails. You must be a LiteLLM Enterprise user to use this feature. If you have a license please set `LITELLM_LICENSE` in your env. Get a 7 day trial key here: https://www.litellm.ai/enterprise#trial. \nPricing: https://www.litellm.ai/#pricing'}
And logs from the container:
21:05:57 - LiteLLM Proxy:ERROR: key_management_endpoints.py:1462 - litellm.proxy.proxy_server.update_key_fn(): Exception occured - 403: {'error': 'This feature is only available for LiteLLM Enterprise users: guardrails. You must be a LiteLLM Enterprise user to use this feature. If you have a license please set `LITELLM_LICENSE` in your env. Get a 7 day trial key here: https://www.litellm.ai/enterprise#trial. \nPricing: https://www.litellm.ai/#pricing'}
Traceback (most recent call last):
File "/usr/lib/python3.13/site-packages/litellm/proxy/management_endpoints/key_management_endpoints.py", line 1418, in update_key_fn
non_default_values = await prepare_key_update_data(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
INFO: 10.42.173.240:54324 - "POST /key/update HTTP/1.1" 403 Forbidden
data=data, existing_key_row=existing_key_row
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/usr/lib/python3.13/site-packages/litellm/proxy/management_endpoints/key_management_endpoints.py", line 1163, in prepare_key_update_data
_set_object_metadata_field(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
object_data=data,
^^^^^^^^^^^^^^^^^
field_name=field,
^^^^^^^^^^^^^^^^^
value=getattr(data, field),
^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/usr/lib/python3.13/site-packages/litellm/proxy/management_endpoints/common_utils.py", line 46, in _set_object_metadata_field
_premium_user_check(field_name)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/usr/lib/python3.13/site-packages/litellm/proxy/utils.py", line 3590, in _premium_user_check
raise HTTPException(
...<2 lines>...
)
@icsy7867
If you’re running it locally, please try running npm run dev and check it on http://localhost:3000.
I am running the rootless container in Kubernetes, so I don't think that applies.
@icsy7867
From the logs, it looks like the backend is running fine — maybe it’s a browser cache issue?
Could you try clearing your browser cache once?
I just pulled the last nightly build: main-v1.77.7-nightly
The bug still happens when I want to update anything in a virtual key.
It makes litellm pretty much unusable, since I can't assign models to virtual keys I have already enrolled in projects.
The problem seems to be caused by `guardrails` being set to an empty list by prior versions of litellm. E.g. when I want to save my virtual key, the following payload is sent to the backend:
{
  "auto_rotate": false,
  "budget_duration": null,
  "disabled_callbacks": [],
  "guardrails": [],
  "key": "64a243ff899b4b244e78cd3334cfcec29f956d15e88c5ad6bdf5c82327eccd86",
  "key_alias": "Continue",
  "max_budget": 100,
  "max_parallel_requests": null,
  "metadata": {
    "logging": []
  },
  "models": [
    "snowflake-arctic-embed-l-v2.0",
    "gpt-4.1",
    "gpt-4.1-mini",
    "qwen-3",
    "gpt-5",
    "gpt-5-mini",
    "qwen3-next-80b-a3b-thinking",
    "qwen3-235b-a22b-2507",
    "gpt-oss-120b"
  ],
  "object_permission": {
    "mcp_access_groups": [],
    "mcp_servers": [],
    "object_permission_id": "be5f395e-ff57-4d0c-b7c3-8726a46613aa",
    "vector_stores": []
  },
  "prompts": [],
  "rotation_interval": null,
  "rpm_limit": null,
  "team_id": "a9823afa-3847-4d05-8f88-031a226ebaf4",
  "token": "64a243ff899b4b244e78cd3334cfcec29f956d15e88c5ad6bdf5c82327eccd86",
  "tpm_limit": null
}
I never configured any guardrails or used any premium features.
Looking at your backend code:
def prepare_metadata_fields(
    data: BaseModel, non_default_values: dict, existing_metadata: dict
) -> dict:
    """
    Check LiteLLM_ManagementEndpoint_MetadataFields (proxy/_types.py) for fields that are allowed to be updated
    """
    if "metadata" not in non_default_values:  # allow user to set metadata to none
        non_default_values["metadata"] = existing_metadata.copy()

    casted_metadata = cast(dict, non_default_values["metadata"])

    data_json = data.model_dump(exclude_unset=True, exclude_none=True)

    try:
        for k, v in data_json.items():
            if k in LiteLLM_ManagementEndpoint_MetadataFields:
                if isinstance(v, datetime):
                    casted_metadata[k] = v.isoformat()
                else:
                    casted_metadata[k] = v
            if k in LiteLLM_ManagementEndpoint_MetadataFields_Premium:
                from litellm.proxy.utils import _premium_user_check

                _premium_user_check(k)
                casted_metadata[k] = v
    except Exception as e:
        verbose_proxy_logger.exception(
            "litellm.proxy.proxy_server.prepare_metadata_fields(): Exception occured - {}".format(
                str(e)
            )
        )

    non_default_values["metadata"] = casted_metadata
    return non_default_values
`data.model_dump(exclude_unset=True, exclude_none=True)` will still contain the `guardrails` key in that case and trigger the enterprise error.
Maybe empty collections like `[]` should also be treated as unset to cover these cases?
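The behaviour is easy to reproduce with plain pydantic. A minimal sketch (the model and field names below are illustrative stand-ins, not LiteLLM's actual request class):

```python
from typing import List, Optional

from pydantic import BaseModel


# Hypothetical stand-in for the key-update request model.
class UpdateKeyRequest(BaseModel):
    key: Optional[str] = None
    guardrails: Optional[List[str]] = None
    models: Optional[List[str]] = None


# The UI sends "guardrails": [] explicitly, so the field counts as "set".
req = UpdateKeyRequest(key="sk-...", guardrails=[], models=["gpt-4.1"])

dumped = req.model_dump(exclude_unset=True, exclude_none=True)

# exclude_unset only drops fields the client never sent, and exclude_none only
# drops None values -- an explicitly-set empty list survives both filters.
assert "guardrails" in dumped
assert dumped["guardrails"] == []
```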
A potential fix could be using a pydantic model serializer; I pushed a small example to #15266.
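One possible shape for such a fix, sketched with a pydantic v2 `model_serializer` (illustrative only; the model is a hypothetical stand-in, and this is not necessarily what #15266 implements):

```python
from typing import Any, Dict, List, Optional

from pydantic import BaseModel, model_serializer


class UpdateKeyRequest(BaseModel):
    # Hypothetical subset of the key-update payload.
    key: Optional[str] = None
    guardrails: Optional[List[str]] = None
    models: Optional[List[str]] = None

    @model_serializer(mode="wrap")
    def _drop_empty_collections(self, handler) -> Dict[str, Any]:
        # Serialize normally, then drop keys whose value is None or an empty
        # collection, so a stale "guardrails": [] no longer looks like an update.
        data = handler(self)
        return {k: v for k, v in data.items() if v not in (None, [], {})}


req = UpdateKeyRequest(key="sk-...", guardrails=[], models=["gpt-4.1"])
cleaned = req.model_dump()
assert "guardrails" not in cleaned
assert cleaned["models"] == ["gpt-4.1"]
```

With this in place, downstream code like `prepare_metadata_fields` would never see the empty `guardrails` list and the premium check would not fire.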
I’ll check it tomorrow, but it might just be that npm build wasn’t run.
fixed on 1.77.7
Should have been fixed with this PR - https://github.com/BerriAI/litellm/commit/aca8ae7962b8e72de86c4d51399be81fa977cf7e
I see it's on v1.77.7-nightly. Could this just be a UI build issue?
The commit hash for tag v1.77.7-nightly is https://github.com/BerriAI/litellm/commit/b7ca138a8d083b3e32c4d497280284ad9dce38c3.
Between https://github.com/BerriAI/litellm/commit/aca8ae7962b8e72de86c4d51399be81fa977cf7e (where this fix was included) and https://github.com/BerriAI/litellm/commit/b7ca138a8d083b3e32c4d497280284ad9dce38c3, there were no changes made to litellm/proxy/_experimental/out/, so it seems the UI wasn’t built during that release.
I tried litellm_rc_branch-v1.77.7.dev6 today, still the same issue.
Guardrails is still sent as an empty list to the server.
I experience the same when only modifying the budget field, also on litellm_rc_branch-v1.77.7.dev6
Seems to be fixed in v1.77.1.rc.5
In the latest version of litellm, v1.77.7.dev12, this issue has not been fixed.
Seems to be fixed in
v1.77.1.rc.5
That version has the fix; I don't know whether it will be merged into the latest stable version.
Hello everyone, good evening! I'm facing the same issue mentioned above. When will we have this fix in the stable version of LiteLLM?
I have also been waiting for a stable release; it seems like it has been two to three weeks since the last one.
1.79.3 built from git. The issue is still present.
any updates @ishaan-jaff?
hi all, not sure why you're seeing this - i'm not able to reproduce this behaviour on the latest version of OSS.
https://github.com/user-attachments/assets/b5097081-9507-4a2c-ad63-7bc9836b6b30
But it is still present on 1.77.3 (latest); I've just updated.
I've tested the latest 1.80.0. The bug is still there.