[Bug]: "custom_llm_provider is required" logged on embedding requests
What happened?
When using embeddings, the error below gets logged. Tested with openai/text-embedding-3-small and vertex_ai/text-embedding-005.
Relevant log output
15:28:47 - LiteLLM:ERROR: litellm_logging.py:1988 - LiteLLM.LoggingError: [Non-Blocking] Exception occurred while success logging
Traceback (most recent call last):
  File "/Users/abarahonar/.pyenv/versions/3.10.16/lib/python3.10/site-packages/litellm/litellm_core_utils/litellm_logging.py", line 1920, in async_success_handler
    await callback.async_log_success_event(
  File "/Users/abarahonar/.pyenv/versions/3.10.16/lib/python3.10/site-packages/litellm/router_strategy/budget_limiter.py", line 368, in async_log_success_event
    raise ValueError("custom_llm_provider is required")
ValueError: custom_llm_provider is required
Are you a ML Ops Team?
Yes
What LiteLLM version are you on?
v1.67.2
Twitter / LinkedIn details
No response
cc @S1LV3RJ1NX
@abarahonar - can you please help me with steps to reproduce?
The following config and request yield the "custom_llm_provider is required" error on the server, not in the response. The request itself succeeds, but our logs are filled with these messages.
# config.yaml
model_list:
  - model_name: text-embedding-3-small
    litellm_params:
      model: openai/text-embedding-3-small
    model_info:
      mode: embedding
# sample request
#!/usr/bin/env bash
ENDPOINT="http://localhost:4000"
TOKEN="..."
MODEL=text-embedding-3-small

curl --request POST \
  --url "${ENDPOINT}/embeddings" \
  --header "Authorization: Bearer ${TOKEN}" \
  --header 'Content-Type: application/json' \
  --data "{
    \"input\": [\"Academia.edu uses\"],
    \"model\": \"${MODEL}\",
    \"encoding_format\": \"base64\"
  }" \
  --silent
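The same request can also be sent through the OpenAI Python SDK pointed at the proxy; a minimal sketch (base_url and api_key are placeholders matching the curl script above):

# same embeddings request via the OpenAI Python SDK, pointed at the proxy
# (sketch; base_url/api_key are placeholders, as in the curl script)
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="...")

response = client.embeddings.create(
    model="text-embedding-3-small",
    input=["Academia.edu uses"],
    encoding_format="base64",
)
print(response.usage)  # the call succeeds; the error only appears in the server logs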
Did you manage to reproduce it, @S1LV3RJ1NX?
I have this error too.
Looking at the code, it might be a simple fix.
From async def async_log_success_event in litellm/router_strategy/budget_limiter.py:
custom_llm_provider: str = kwargs.get("litellm_params", {}).get(
    "custom_llm_provider", None
)
if custom_llm_provider is None:
    raise ValueError("custom_llm_provider is required")

budget_config = self._get_budget_config_for_provider(custom_llm_provider)
if budget_config:
    # increment spend for provider
    ...  # etc.

deployment_budget_config = self._get_budget_config_for_deployment(model_id)
if deployment_budget_config:
    # increment spend for specific deployment id
    ...  # etc.
If custom_llm_provider isn't found, I don't think it should throw an exception. Instead, it should just avoid the block that uses custom_llm_provider.
My suggested fix:
custom_llm_provider: Optional[str] = kwargs.get("litellm_params", {}).get(
    "custom_llm_provider", None
)
if custom_llm_provider is not None:
    budget_config = self._get_budget_config_for_provider(custom_llm_provider)
    if budget_config:
        # increment spend for provider
        ...  # etc.

# the deployment block doesn't use custom_llm_provider, so it stays outside the guard
deployment_budget_config = self._get_budget_config_for_deployment(model_id)
if deployment_budget_config:
    # increment spend for specific deployment id
    ...  # etc.
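If silently skipping isn't desirable, an alternative might be to fall back to deriving the provider from the model name before giving up. A hedged sketch, assuming the logging kwargs carry the model name (I haven't verified they always do here) and using litellm.get_llm_provider, which raises when it can't map a model to a provider:

from typing import Optional

import litellm

custom_llm_provider: Optional[str] = kwargs.get("litellm_params", {}).get(
    "custom_llm_provider", None
)
if custom_llm_provider is None:
    model = kwargs.get("model")  # assumption: the model name is present in kwargs
    if model:
        try:
            # get_llm_provider returns a (model, custom_llm_provider, api_key, api_base) tuple
            _, custom_llm_provider, _, _ = litellm.get_llm_provider(model=model)
        except Exception:
            custom_llm_provider = None  # still unknown; skip provider budget tracking

if custom_llm_provider is not None:
    ...  # provider budget block as above

Either way, a debug-level log line instead of a raised ValueError would keep the success-logging path quiet.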
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.