[Bug]: password set in .env doesn't work after setting the config.yaml path in docker-compose.yml
What happened?
A bug happened, as described in the title.
This is my docker-compose.yml; the part relevant to config.yaml is set up as suggested by the instructions:
version: "3.11" services: litellm: build: context: . args: target: runtime image: ghcr.io/berriai/litellm:main-stable volumes: - ./config.yaml:/app/config.yaml command: - "--config=/app/config.yaml" ports: - "4000:4000" # Map the container port to the host, change the host port if necessary environment: DATABASE_URL: "postgresql://llmproxy:dbpassword9090@db:5432/litellm" STORE_MODEL_IN_DB: "True" # allows adding models to proxy via UI env_file: - .env # Load local .env file
My .env file is:

LITELLM_MASTER_KEY="^^^^^"
LITELLM_SALT_KEY="*****"
MISTRAL_API_KEY="^^^^^^^^^I"

My config.yaml is:
model_list:
  - model_name: Codestral
    litellm_params:
      model: mistral/Codestral
      api_key: "os.environ/MISTRAL_API_KEY"

litellm_settings:
  set_verbose: True # Uncomment this if you want to see verbose logs; not recommended in production
  drop_params: True
  max_budget: 100
  budget_duration: 30d
  num_retries: 5
  request_timeout: 600
  telemetry: False
  context_window_fallbacks: [{"gpt-3.5-turbo": ["gpt-3.5-turbo-large"]}]
  default_team_settings:
    - team_id: team-1
      success_callback: ["langfuse"]
      failure_callback: ["langfuse"]
      langfuse_public_key: os.environ/LANGFUSE_PROJECT1_PUBLIC # Project 1
      langfuse_secret: os.environ/LANGFUSE_PROJECT1_SECRET # Project 1
    - team_id: team-2
      success_callback: ["langfuse"]
      failure_callback: ["langfuse"]
      langfuse_public_key: os.environ/LANGFUSE_PROJECT2_PUBLIC # Project 2
      langfuse_secret: os.environ/LANGFUSE_PROJECT2_SECRET # Project 2
      langfuse_host: https://us.cloud.langfuse.com

# For /fine_tuning/jobs endpoints
finetune_settings:
  - custom_llm_provider: azure
    api_base: https://exampleopenaiendpoint-production.up.railway.app
    api_key: fake-key
    api_version: "2023-03-15-preview"
  - custom_llm_provider: openai
    api_key: os.environ/OPENAI_API_KEY

# For /files endpoints
files_settings:
  - custom_llm_provider: azure
    api_base: https://exampleopenaiendpoint-production.up.railway.app
    api_key: fake-key
    api_version: "2023-03-15-preview"
  - custom_llm_provider: openai
    api_key: os.environ/OPENAI_API_KEY

router_settings:
  routing_strategy: usage-based-routing-v2
  redis_host: os.environ/REDIS_HOST
  redis_password: os.environ/REDIS_PASSWORD
  redis_port: os.environ/REDIS_PORT
  enable_pre_call_checks: true
  model_group_alias: {"my-special-fake-model-alias-name": "fake-openai-endpoint-3"}

general_settings:
  master_key: sk-1234 # [OPTIONAL] Use to enforce auth on proxy. See - https://docs.litellm.ai/docs/proxy/virtual_keys
  store_model_in_db: True
  proxy_budget_rescheduler_min_time: 60
  proxy_budget_rescheduler_max_time: 64
  proxy_batch_write_at: 1
  database_connection_pool_limit: 10
  database_url: "postgresql://<user>:<password>@<host>:<port>/<dbname>" # [OPTIONAL] use for token-based auth to proxy
  pass_through_endpoints:
    - path: "/v1/rerank" # route you want to add to LiteLLM Proxy Server
      target: "https://api.cohere.com/v1/rerank" # URL this route should forward requests to
      headers: # headers to forward to this URL
        content-type: application/json # (Optional) Extra Headers to pass to this endpoint
        accept: application/json
      forward_headers: True

environment_variables:
  # settings for using redis caching
  REDIS_HOST: redis-16337.c322.us-east-1-2.ec2.cloud.redislabs.com
  REDIS_PORT: "16337"
  REDIS_PASSWORD:
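One observation that may explain the 401 on /login (an assumption on my side, not verified): the config above hardcodes general_settings.master_key: sk-1234, so if a master_key set in config.yaml is what the proxy enforces, logging in with the LITELLM_MASTER_KEY value from .env would fail. A minimal sketch of referencing the env var from the config instead of hardcoding it:

```yaml
# Sketch: let general_settings read the master key from the environment
# (LITELLM_MASTER_KEY comes from the .env file loaded by docker compose).
general_settings:
  master_key: os.environ/LITELLM_MASTER_KEY
```

Alternatively, removing master_key from config.yaml entirely should leave the proxy falling back to the LITELLM_MASTER_KEY environment variable.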
Relevant log output
litellm-1 | /usr/local/lib/python3.11/site-packages/pydantic/_internal/_fields.py:160: UserWarning: Field "model_max_budget" has conflict with protected namespace "model_".
litellm-1 |
litellm-1 | You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
litellm-1 | warnings.warn(
litellm-1 | /usr/local/lib/python3.11/site-packages/pydantic/_internal/_fields.py:160: UserWarning: Field "model_id" has conflict with protected namespace "model_".
litellm-1 |
litellm-1 | You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
litellm-1 | warnings.warn(
litellm-1 | /usr/local/lib/python3.11/site-packages/pydantic/_internal/_fields.py:160: UserWarning: Field "model_name" has conflict with protected namespace "model_".
litellm-1 |
litellm-1 | You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
litellm-1 | warnings.warn(
litellm-1 | /usr/local/lib/python3.11/site-packages/pydantic/_internal/_fields.py:160: UserWarning: Field "model_info" has conflict with protected namespace "model_".
litellm-1 |
litellm-1 | You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
litellm-1 | warnings.warn(
litellm-1 | /usr/local/lib/python3.11/site-packages/pydantic/_internal/_fields.py:160: UserWarning: Field "model_spend" has conflict with protected namespace "model_".
litellm-1 |
litellm-1 | You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
litellm-1 | warnings.warn(
litellm-1 | /usr/local/lib/python3.11/site-packages/pydantic/_internal/_fields.py:160: UserWarning: Field "model_aliases" has conflict with protected namespace "model_".
litellm-1 |
litellm-1 | You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
litellm-1 | warnings.warn(
litellm-1 | /usr/local/lib/python3.11/site-packages/pydantic/_internal/_fields.py:160: UserWarning: Field "model_group" has conflict with protected namespace "model_".
litellm-1 |
litellm-1 | You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
litellm-1 | warnings.warn(
litellm-1 | Prisma schema loaded from schema.prisma
litellm-1 | Datasource "client": PostgreSQL database "litellm", schema "public" at "db:5432"
litellm-1 |
litellm-1 | 🚀 Your database is now in sync with your Prisma schema. Done in 1.55s
litellm-1 |
Running generate... - Prisma Client Python (v0.11.0)
litellm-1 |
litellm-1 | Some types are disabled by default due to being incompatible with Mypy, it is highly recommended
litellm-1 | to use Pyright instead and configure Prisma Python to use recursive types. To re-enable certain types:
litellm-1 |
litellm-1 | generator client {
litellm-1 | provider = "prisma-client-py"
litellm-1 | recursive_type_depth = -1
litellm-1 | }
litellm-1 |
litellm-1 | If you need to use Mypy, you can also disable this message by explicitly setting the default value:
litellm-1 |
litellm-1 | generator client {
litellm-1 | provider = "prisma-client-py"
litellm-1 | recursive_type_depth = 5
litellm-1 | }
litellm-1 |
litellm-1 | For more information see: https://prisma-client-py.readthedocs.io/en/stable/reference/limitations/#default-type-limitations
litellm-1 |
✔ Generated Prisma Client Python (v0.11.0) to ./../../prisma in 443ms
litellm-1 |
litellm-1 | INFO: Started server process [1]
litellm-1 | INFO: Waiting for application startup.
litellm-1 | INFO: Application startup complete.
litellm-1 | INFO: Uvicorn running on http://0.0.0.0:4000 (Press CTRL+C to quit)
litellm-1 |
litellm-1 | #------------------------------------------------------------#
litellm-1 | # #
litellm-1 | # 'I get frustrated when the product...' #
litellm-1 | # https://github.com/BerriAI/litellm/issues/new #
litellm-1 | # #
litellm-1 | #------------------------------------------------------------#
litellm-1 |
litellm-1 | Thank you for using LiteLLM! - Krrish & Ishaan
litellm-1 |
litellm-1 |
litellm-1 |
litellm-1 | Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
litellm-1 |
litellm-1 |
litellm-1 | LiteLLM: Proxy initialized with Config, Set models:
litellm-1 | Codestral
litellm-1 | Alerting: Initializing Weekly/Monthly Spend Reports
litellm-1 | INFO: 192.168.88.6:57226 - "GET / HTTP/1.1" 200 OK
litellm-1 | INFO: 192.168.88.6:57226 - "GET /openapi.json HTTP/1.1" 200 OK
litellm-1 | INFO: 192.168.88.6:57226 - "GET /sso/key/generate HTTP/1.1" 200 OK
litellm-1 | INFO: 192.168.88.6:57226 - "POST /login HTTP/1.1" 401 Unauthorized
litellm-1 | INFO: 172.25.0.4:53830 - "GET /metrics HTTP/1.1" 404 Not Found
litellm-1 | INFO: 172.25.0.4:49744 - "GET /metrics HTTP/1.1" 404 Not Found
prometheus-1 | ts=2024-10-22T06:01:05.320Z caller=main.go:645 level=info msg="Starting Prometheus Server" mode=server version="(version=2.54.1, branch=HEAD, revision=e6cfa720fbe6280153fab13090a483dbd40bece3)"
prometheus-1 | ts=2024-10-22T06:01:05.320Z caller=main.go:650 level=info build_context="(go=go1.22.6, platform=linux/amd64, user=root@812ffd741951, date=20240827-10:56:41, tags=netgo,builtinassets,stringlabels)"
prometheus-1 | ts=2024-10-22T06:01:05.320Z caller=main.go:651 level=info host_details="(Linux 6.1.0-23-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.99-1 (2024-07-15) x86_64 96580094d3d5 (none))"
prometheus-1 | ts=2024-10-22T06:01:05.320Z caller=main.go:652 level=info fd_limits="(soft=1048576, hard=1048576)"
prometheus-1 | ts=2024-10-22T06:01:05.320Z caller=main.go:653 level=info vm_limits="(soft=unlimited, hard=unlimited)"
prometheus-1 | ts=2024-10-22T06:01:05.339Z caller=web.go:571 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
prometheus-1 | ts=2024-10-22T06:01:05.340Z caller=main.go:1160 level=info msg="Starting TSDB ..."
prometheus-1 | ts=2024-10-22T06:01:05.341Z caller=repair.go:56 level=info component=tsdb msg="Found healthy block" mint=1729525347463 maxt=1729533600000 ulid=01JARMYMRH9Q6FP0SSDE64KAWA
prometheus-1 | ts=2024-10-22T06:01:05.341Z caller=repair.go:56 level=info component=tsdb msg="Found healthy block" mint=1729555200000 maxt=1729562400000 ulid=01JAS56624JRGPAAC6X7KXGZKR
prometheus-1 | ts=2024-10-22T06:01:05.341Z caller=repair.go:56 level=info component=tsdb msg="Found healthy block" mint=1729562400000 maxt=1729569600000 ulid=01JASAVNTT61QBYF39ZXZQ5BEE
prometheus-1 | ts=2024-10-22T06:01:05.341Z caller=repair.go:56 level=info component=tsdb msg="Found healthy block" mint=1729533603346 maxt=1729555200000 ulid=01JASAVP7Q3GXYVYMB0F707930
prometheus-1 | ts=2024-10-22T06:01:05.342Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090
prometheus-1 | ts=2024-10-22T06:01:05.342Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
prometheus-1 | ts=2024-10-22T06:01:05.351Z caller=head.go:626 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
prometheus-1 | ts=2024-10-22T06:01:05.351Z caller=head.go:713 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=11.724µs
prometheus-1 | ts=2024-10-22T06:01:05.351Z caller=head.go:721 level=info component=tsdb msg="Replaying WAL, this may take a while"
prometheus-1 | ts=2024-10-22T06:01:05.352Z caller=head.go:758 level=info component=tsdb msg="WAL checkpoint loaded"
prometheus-1 | ts=2024-10-22T06:01:05.353Z caller=head.go:793 level=info component=tsdb msg="WAL segment loaded" segment=13 maxSegment=23
prometheus-1 | ts=2024-10-22T06:01:05.353Z caller=head.go:793 level=info component=tsdb msg="WAL segment loaded" segment=14 maxSegment=23
prometheus-1 | ts=2024-10-22T06:01:05.354Z caller=head.go:793 level=info component=tsdb msg="WAL segment loaded" segment=15 maxSegment=23
prometheus-1 | ts=2024-10-22T06:01:05.355Z caller=head.go:793 level=info component=tsdb msg="WAL segment loaded" segment=16 maxSegment=23
prometheus-1 | ts=2024-10-22T06:01:05.356Z caller=head.go:793 level=info component=tsdb msg="WAL segment loaded" segment=17 maxSegment=23
prometheus-1 | ts=2024-10-22T06:01:05.356Z caller=head.go:793 level=info component=tsdb msg="WAL segment loaded" segment=18 maxSegment=23
prometheus-1 | ts=2024-10-22T06:01:05.360Z caller=head.go:793 level=info component=tsdb msg="WAL segment loaded" segment=19 maxSegment=23
prometheus-1 | ts=2024-10-22T06:01:05.360Z caller=head.go:793 level=info component=tsdb msg="WAL segment loaded" segment=20 maxSegment=23
prometheus-1 | ts=2024-10-22T06:01:05.379Z caller=head.go:793 level=info component=tsdb msg="WAL segment loaded" segment=21 maxSegment=23
prometheus-1 | ts=2024-10-22T06:01:05.383Z caller=head.go:793 level=info component=tsdb msg="WAL segment loaded" segment=22 maxSegment=23
prometheus-1 | ts=2024-10-22T06:01:05.384Z caller=head.go:793 level=info component=tsdb msg="WAL segment loaded" segment=23 maxSegment=23
prometheus-1 | ts=2024-10-22T06:01:05.384Z caller=head.go:830 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=909.212µs wal_replay_duration=31.540947ms wbl_replay_duration=171ns chunk_snapshot_load_duration=0s mmap_chunk_replay_duration=11.724µs total_replay_duration=32.492216ms
prometheus-1 | ts=2024-10-22T06:01:05.388Z caller=main.go:1181 level=info fs_type=EXT4_SUPER_MAGIC
prometheus-1 | ts=2024-10-22T06:01:05.388Z caller=main.go:1184 level=info msg="TSDB started"
prometheus-1 | ts=2024-10-22T06:01:05.388Z caller=main.go:1367 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
prometheus-1 | ts=2024-10-22T06:01:05.389Z caller=main.go:1404 level=info msg="updated GOGC" old=100 new=75
prometheus-1 | ts=2024-10-22T06:01:05.390Z caller=main.go:1415 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.105527ms db_storage=1.372µs remote_storage=2.134µs web_handler=720ns query_engine=5.857µs scrape=405.313µs scrape_sd=40.722µs notify=1.034µs notify_sd=971ns rules=1.685µs tracing=10.587µs
prometheus-1 | ts=2024-10-22T06:01:05.390Z caller=main.go:1145 level=info msg="Server is ready to receive web requests."
prometheus-1 | ts=2024-10-22T06:01:05.390Z caller=manager.go:164 level=info component="rule manager" msg="Starting rule manager..."
db-1 | The files belonging to this database system will be owned by user "postgres".
db-1 | This user must also own the server process.
db-1 |
db-1 | The database cluster will be initialized with locale "en_US.utf8".
db-1 | The default database encoding has accordingly been set to "UTF8".
db-1 | The default text search configuration will be set to "english".
db-1 |
db-1 | Data page checksums are disabled.
db-1 |
db-1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db-1 | creating subdirectories ... ok
db-1 | selecting dynamic shared memory implementation ... posix
db-1 | selecting default "max_connections" ... 100
db-1 | selecting default "shared_buffers" ... 128MB
db-1 | selecting default time zone ... Etc/UTC
db-1 | creating configuration files ... ok
db-1 | running bootstrap script ... ok
db-1 | performing post-bootstrap initialization ... ok
db-1 | initdb: warning: enabling "trust" authentication for local connections
db-1 | initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
db-1 | syncing data to disk ... ok
db-1 |
db-1 |
db-1 | Success. You can now start the database server using:
db-1 |
db-1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
db-1 |
db-1 | waiting for server to start....2024-10-22 06:01:06.488 UTC [56] LOG: starting PostgreSQL 17.0 (Debian 17.0-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
db-1 | 2024-10-22 06:01:06.510 UTC [56] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db-1 | 2024-10-22 06:01:06.588 UTC [59] LOG: database system was shut down at 2024-10-22 06:01:05 UTC
db-1 | 2024-10-22 06:01:06.615 UTC [56] LOG: database system is ready to accept connections
db-1 | done
db-1 | server started
db-1 | CREATE DATABASE
db-1 |
db-1 |
db-1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db-1 |
db-1 | waiting for server to shut down...2024-10-22 06:01:07.090 UTC [56] LOG: received fast shutdown request
db-1 | .2024-10-22 06:01:07.121 UTC [56] LOG: aborting any active transactions
db-1 | 2024-10-22 06:01:07.125 UTC [56] LOG: background worker "logical replication launcher" (PID 62) exited with exit code 1
db-1 | 2024-10-22 06:01:07.126 UTC [57] LOG: shutting down
db-1 | 2024-10-22 06:01:07.146 UTC [57] LOG: checkpoint starting: shutdown immediate
db-1 | 2024-10-22 06:01:07.174 UTC [76] FATAL: the database system is shutting down
db-1 | 2024-10-22 06:01:07.481 UTC [57] LOG: checkpoint complete: wrote 921 buffers (5.6%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.060 s, sync=0.179 s, total=0.356 s; sync files=301, longest=0.125 s, average=0.001 s; distance=4238 kB, estimate=4238 kB; lsn=0/1908978, redo lsn=0/1908978
db-1 | 2024-10-22 06:01:07.489 UTC [56] LOG: database system is shut down
db-1 | done
db-1 | server stopped
db-1 |
db-1 | PostgreSQL init process complete; ready for start up.
db-1 |
db-1 | 2024-10-22 06:01:07.592 UTC [1] LOG: starting PostgreSQL 17.0 (Debian 17.0-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
db-1 | 2024-10-22 06:01:07.592 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db-1 | 2024-10-22 06:01:07.592 UTC [1] LOG: listening on IPv6 address "::", port 5432
db-1 | 2024-10-22 06:01:07.665 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db-1 | 2024-10-22 06:01:07.751 UTC [80] LOG: database system was shut down at 2024-10-22 06:01:07 UTC
db-1 | 2024-10-22 06:01:07.787 UTC [1] LOG: database system is ready to accept connections
litellm-1 | INFO: 172.25.0.4:43660 - "GET /metrics HTTP/1.1" 404 Not Found
litellm-1 | INFO: 172.25.0.4:60566 - "GET /metrics HTTP/1.1" 404 Not Found
litellm-1 | INFO: 172.25.0.4:46824 - "GET /metrics HTTP/1.1" 404 Not Found
litellm-1 | INFO: 172.25.0.4:55334 - "GET /metrics HTTP/1.1" 404 Not Found
litellm-1 | INFO: 172.25.0.4:52780 - "GET /metrics HTTP/1.1" 404 Not Found
litellm-1 | INFO: 172.25.0.4:54064 - "GET /metrics HTTP/1.1" 404 Not Found
litellm-1 | INFO: 172.25.0.4:57438 - "GET /metrics HTTP/1.1" 404 Not Found
litellm-1 | INFO: 172.25.0.4:49218 - "GET /metrics HTTP/1.1" 404 Not Found
Twitter / LinkedIn details
No response
Sometimes litellm exits with code 137.
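On the exit code 137: that is a SIGKILL, which in Docker almost always means the container was OOM-killed; checking State.OOMKilled in docker inspect for the litellm container will confirm whether memory was the cause. A hedged compose sketch for making the limit explicit (the 2g figure is an assumption, size it to your host):

```yaml
# Assumption: exit 137 = OOM kill. Give the service an explicit memory ceiling
# so the limit is deliberate rather than whatever the host happens to allow.
services:
  litellm:
    mem_limit: 2g   # hypothetical value - adjust to your host
```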
I'm having the same issue. I suspect the Prometheus URL is referenced as localhost instead of the service name; since the stack isn't on the host network, those URLs would fail.
Maybe someone else can confirm.
EDIT: I doubt this is the issue, since the client itself is down.
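On the /metrics 404s specifically (a sketch under assumptions, not a confirmed fix): inside the compose network Prometheus has to scrape the service by its compose name rather than localhost, and the LiteLLM proxy only serves /metrics once the prometheus callback is enabled in litellm_settings (it may be enterprise-gated depending on version). Assuming the service name litellm and port 4000 from the compose file above:

```yaml
# prometheus.yml - hedged sketch: scrape the proxy by compose service name, not localhost
scrape_configs:
  - job_name: "litellm-proxy"        # job name is arbitrary
    metrics_path: /metrics
    static_configs:
      - targets: ["litellm:4000"]    # compose service name + container port
```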
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.