[Bug]: docker seems to ignore the config file
What happened?
A bug happened!
I was trying to understand why Docker seems to ignore the model list I set in my .yaml file (this does not happen when running litellm --config myconfig.yaml directly), and I think I stumbled upon something more serious:
In the current directory I have a file called "config.yaml" and nothing else.
Here's the output of sudo docker run ghcr.io/berriai/litellm:main-latest --debug --detailed_debug --config thisisnotavalidpath.yaml
/usr/local/lib/python3.11/site-packages/pydantic/_internal/_fields.py:151: UserWarning: Field "model_list" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
/usr/local/lib/python3.11/site-packages/pydantic/_internal/_fields.py:151: UserWarning: Field "model_name" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
/usr/local/lib/python3.11/site-packages/pydantic/_internal/_fields.py:151: UserWarning: Field "model_group_alias" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
/usr/local/lib/python3.11/site-packages/pydantic/_internal/_fields.py:151: UserWarning: Field "model_info" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
/usr/local/lib/python3.11/site-packages/pydantic/_internal/_fields.py:151: UserWarning: Field "model_id" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
INFO: Started server process [1]
INFO: Waiting for application startup.
11:15:56 - LiteLLM Proxy:DEBUG: proxy_server.py:1939 - Loaded config YAML (api_key and environment_variables are not shown):
{
"model_list": [],
"general_settings": {},
"router_settings": {},
"litellm_settings": {}
}
11:15:56 - LiteLLM Router:INFO: router.py:289 - Intialized router with Routing strategy: simple-shuffle
Routing fallbacks: None
Routing context window fallbacks: None
11:15:56 - LiteLLM Proxy:DEBUG: utils.py:36 - INITIALIZING LITELLM CALLBACKS!
11:15:56 - LiteLLM:DEBUG: utils.py:935 - callback: <litellm.proxy.hooks.max_budget_limiter._PROXY_MaxBudgetLimiter object at 0x722c56e56410>
11:15:56 - LiteLLM:DEBUG: utils.py:935 - callback: <bound method ProxyLogging.response_taking_too_long_callback of <litellm.proxy.utils.ProxyLogging object at 0x722c55fa9990>>
11:15:56 - LiteLLM:DEBUG: utils.py:935 - callback: <litellm.proxy.hooks.parallel_request_limiter._PROXY_MaxParallelRequestsHandler object at 0x722c56e55b10>
11:15:56 - LiteLLM:DEBUG: utils.py:935 - callback: <litellm.proxy.hooks.cache_control_check._PROXY_CacheControlCheck object at 0x722c54ece310>
11:15:56 - LiteLLM:DEBUG: utils.py:935 - callback: <bound method Router.deployment_callback_on_failure of <litellm.router.Router object at 0x722c56de1a50>>
11:15:56 - LiteLLM:DEBUG: utils.py:935 - callback: <bound method Router.deployment_callback_on_failure of <litellm.router.Router object at 0x722c54d88c10>>
11:15:56 - LiteLLM:DEBUG: utils.py:935 - callback: <litellm.proxy.hooks.tpm_rpm_limiter._PROXY_MaxTPMRPMLimiter object at 0x722c54ece390>
11:15:56 - LiteLLM Proxy:DEBUG: proxy_server.py:2953 - prisma_client: None
11:15:56 - LiteLLM Proxy:DEBUG: proxy_server.py:2957 - custom_db_client client - None
11:15:56 - LiteLLM Proxy:DEBUG: proxy_server.py:3008 - custom_db_client client None. Master_key: None
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:4000 (Press CTRL+C to quit)
This is the exact same output as running with my (valid) config file: sudo docker run ghcr.io/berriai/litellm:main-latest --debug --detailed_debug --config config.yaml
litellm version: 1.34.33
@thiswillbeyourgithub you need to mount the config.yaml https://docs.litellm.ai/docs/proxy/deploy
docker run \
-v $(pwd)/litellm_config.yaml:/app/config.yaml \
-e AZURE_API_KEY=d6*********** \
-e AZURE_API_BASE=https://openai-***********/ \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-latest \
--config /app/config.yaml --detailed_debug
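(Side note, not from the docs: assuming the image ships a standard Linux userland, you can sanity-check the mount before starting the proxy by overriding the entrypoint and listing the file inside the container.)
docker run --rm \
-v $(pwd)/litellm_config.yaml:/app/config.yaml \
--entrypoint ls \
ghcr.io/berriai/litellm:main-latest \
-l /app/config.yaml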
Oh okay thanks.
I guess I was misled by the docker run ghcr.io/berriai/litellm:main-latest --config your_config.yaml command that appears first in the docs.
But above all, I'm deeply convinced that if the user specifies --config with a path that does not exist, it should crash instead of assuming default values. I can imagine a production environment specifying the wrong path and not wanting Docker to open unexpected ports, etc.
@thiswillbeyourgithub to understand the issue - if you pass in a config, and it can't be found, this should raise an error, correct?
Absolutely
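For illustration, a fail-fast check at CLI-parsing time would be enough. This is only a sketch of the requested behaviour, not LiteLLM's actual code; the function name and call site are hypothetical:
import os
import sys

def validate_config_path(config_path):
    # If the user explicitly passed --config, a missing file should be
    # fatal: silently falling back to defaults hides misconfiguration.
    if config_path is not None and not os.path.isfile(config_path):
        sys.exit(f"Error: --config file not found: {config_path}")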
I see a similar warning in v1.35.29 without docker. This doesn't occur in v1.35.17. I use LiteLLM inside a Django app.
/Users/toni/Developer/Promptmetheus/promptmetheus-core/env/lib/python3.11/site-packages/pydantic/_internal/_fields.py:151: UserWarning: Field "model_name" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
/Users/toni/Developer/Promptmetheus/promptmetheus-core/env/lib/python3.11/site-packages/pydantic/_internal/_fields.py:151: UserWarning: Field "model_info" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
/Users/toni/Developer/Promptmetheus/promptmetheus-core/env/lib/python3.11/site-packages/pydantic/_internal/_fields.py:151: UserWarning: Field "model_name" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
/Users/toni/Developer/Promptmetheus/promptmetheus-core/env/lib/python3.11/site-packages/pydantic/_internal/_fields.py:151: UserWarning: Field "model_info" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
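For context, the warning itself is unrelated to Docker: Pydantic v2 reserves the model_ prefix as a protected namespace, so any model declaring fields like model_name or model_info triggers it. A minimal reproduction (the Deployment class is illustrative, not one of LiteLLM's) with the suppression the warning suggests:
from pydantic import BaseModel, ConfigDict

class Deployment(BaseModel):
    # Disables Pydantic v2's protected "model_" namespace check, which
    # otherwise emits the UserWarning shown above for each field.
    model_config = ConfigDict(protected_namespaces=())

    model_name: str
    model_info: dict = {}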
I can confirm this behaviour in the latest litellm-database image - adding entrypoint: ["litellm", "--config", "/app/config.yaml", "--port", "4000"] to docker compose fixes the problem.
This is the offending line: https://github.com/BerriAI/litellm/blob/main/Dockerfile.database#L76
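For reference, the compose-level workaround might look like this (service name and mount path are illustrative):
services:
  litellm:
    image: ghcr.io/berriai/litellm-database:main-latest
    volumes:
      - ./litellm_config.yaml:/app/config.yaml
    ports:
      - "4000:4000"
    # Override the image's default entrypoint so --config is actually
    # passed through to the proxy.
    entrypoint: ["litellm", "--config", "/app/config.yaml", "--port", "4000"]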
@jumski this is expected behaviour. We don't need the config.yaml to start the proxy.
was there an incorrect tutorial here?
I acknowledge that this is caused by my lack of practice with Docker and the like.
That being said, if we "don't need it but received it anyway", it's still an unexpected argument and I think it should not be silently ignored. No?
yup - let's raise a separate issue for that though as i think the original issue was the config being ignored