llama-stack
                        Default OpenTelemetry sink by ENV variable
🚀 Describe the new functionality needed
Let's support the following two standard OpenTelemetry variables when generating configuration, instead of emitting a hard-coded endpoint:
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
The latter can be skipped if we only want to support HTTP, which I'm fine with.
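The change amounts to reading the standard variables with a fallback when the config is generated. A minimal sketch (the variable names are the OTel standard ones; the defaults shown match the values currently hard-coded):

```python
import os

# Read the standard OpenTelemetry exporter variables at config-generation
# time, falling back to the values that are hard-coded today.
otlp_endpoint = os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT", "http://localhost:4318")
otlp_protocol = os.environ.get("OTEL_EXPORTER_OTLP_PROTOCOL", "http/protobuf")
```

With the variables unset, behavior is identical to today; setting them overrides the endpoint without touching any config file.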
💡 Why is this needed? What if we don't build it?
Right now, we can select the otel sink via TELEMETRY_SINKS=otel and control its service name via OTEL_SERVICE_NAME. However, you can't change the OTLP endpoint without writing your own configuration file, because the generated value is hard-coded rather than read from a variable.
This means first-timers have to edit configuration when using the easiest way to start (Docker), because inside a container localhost doesn't resolve to the host's localhost.
By supporting ENV-based defaults, they could change the endpoint like this:
OTEL_EXPORTER_OTLP_ENDPOINT=http://host.docker.internal:4318
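One way to wire this up is env substitution in the generated run.yaml. The fragment below is illustrative only: the field names and the `${env.VAR:default}` syntax are assumptions about the template format, not the actual schema.

```yaml
# Illustrative fragment; field names and substitution syntax are assumed.
telemetry:
  config:
    sinks: ${env.TELEMETRY_SINKS:console}
    otel_endpoint: ${env.OTEL_EXPORTER_OTLP_ENDPOINT:http://localhost:4318}
```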
Other thoughts
A workaround is to use --add-host=localhost:host-gateway, which lets folks use ollama and otel when both are running on the Docker host, like so:
docker run --rm --name llama-stack --tty -p 5000:5000 \
  --add-host=localhost:host-gateway \
  llamastack/distribution-ollama \
  --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
  --env TELEMETRY_SINKS=otel