opencensus-specs
Specify an exporter configuration format
Most non-library OpenCensus users will have to load some configuration to initialize their exporters. Specify a common configuration format for the exporters so that the OpenCensus community can reuse the same configuration file across different tools.
An initial proposal for the configuration is below. We can also provide parsers in each language to set up the known exporters from the configuration file.
```yaml
exporters:
  prometheus:
    addr: "localhost:9999"
  stackdriver:
    project-id: bamboo-cloud-100
  openzipkin:
    endpoint: "http://localhost:9411/api/v2/spans"
    hostport: "server:5454"
    name: server
```
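A per-language parser for such a file could map each exporter key to a constructor. Here is a minimal Python sketch; the exporter factories and their string outputs are hypothetical stand-ins for real OpenCensus exporter packages, and the dict is what a YAML parser would produce from the proposal above:

```python
# Constructors keyed by the exporter name used in the config file.
# These lambdas are hypothetical stand-ins for real exporter classes.
EXPORTER_FACTORIES = {
    "prometheus": lambda opts: f"PrometheusExporter(addr={opts['addr']})",
    "stackdriver": lambda opts: f"StackdriverExporter(project_id={opts['project-id']})",
}

def create_exporters(config):
    """Instantiate every exporter listed under the 'exporters' key."""
    exporters = []
    for name, opts in config.get("exporters", {}).items():
        factory = EXPORTER_FACTORIES.get(name)
        if factory is None:
            raise ValueError(f"unknown exporter: {name}")
        exporters.append(factory(opts))
    return exporters

# The dict a YAML parser would produce from the proposal above.
config = {
    "exporters": {
        "prometheus": {"addr": "localhost:9999"},
        "stackdriver": {"project-id": "bamboo-cloud-100"},
    }
}
print(create_exporters(config))
```

An unknown exporter name fails loudly here; a real parser might instead warn and skip, so one config file can be shared by binaries that link different exporter sets.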
/cc @bogdandrutu @acetechnologist @songy23 @adriancole @dinooliva @odeke-em @g-easy
We also briefly discussed offline that the configuration file can live in a well-known location, and exporters can automatically be enabled if the file exists. It might be tricky to support this behavior for backends that require pull (e.g. Prometheus, where user code is expected to register a handler and start a server).
I would like to have this as a proto file if possible; I think it makes more sense. It would also probably be good to separate trace and stats, in case someone wants Stackdriver trace and Prometheus stats.
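Separating the two could look like this (a hypothetical layout, not part of the original proposal; section names are illustrative):

```yaml
trace-exporters:
  stackdriver:
    project-id: bamboo-cloud-100
stats-exporters:
  prometheus:
    addr: "localhost:9999"
```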
We should also declare precedence: if both a global configuration and one that is part of the application are found, which one is used? Or should they be merged?
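If the answer is to merge, one plausible rule is that application-level values override global ones key by key. A sketch of that rule (the precedence question itself is still open, so this is only one option):

```python
def merge_configs(global_cfg, app_cfg):
    """Recursively merge two config dicts; application values win on conflict."""
    merged = dict(global_cfg)
    for key, value in app_cfg.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_configs(merged[key], value)
        else:
            merged[key] = value
    return merged

# The application overrides the Prometheus addr and adds Stackdriver.
global_cfg = {"exporters": {"prometheus": {"addr": "localhost:9999"}}}
app_cfg = {"exporters": {"prometheus": {"addr": "localhost:8888"},
                         "stackdriver": {"project-id": "bamboo-cloud-100"}}}
print(merge_configs(global_cfg, app_cfg))
```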
And yeah, best to separate trace and stats. In the Erlang lib we've been doing trace exporters and stat reporters in the configuration.
> As in would the global configuration or one that is part of the application be used if both are found.
We can have an env variable (e.g. OPENCENSUS_CONFIG) that points to the location of the configuration file; it can default to a well-known location such as ~/.config/opencensus/exporters.xxx.
Each exporter can register itself with the configuration registry, and the configuration package can use the registered exporters when parsing the file.
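The env-variable lookup and the registry could fit together roughly like this. All names here (`register_exporter`, `setup_from_config`, the `exporters.yaml` file name standing in for `exporters.xxx`) are hypothetical:

```python
import os
from pathlib import Path

# Hypothetical default location; the file name/extension is not decided yet.
DEFAULT_CONFIG_PATH = Path.home() / ".config" / "opencensus" / "exporters.yaml"

def config_path():
    """Resolve the configuration file location, preferring OPENCENSUS_CONFIG."""
    return Path(os.environ.get("OPENCENSUS_CONFIG", str(DEFAULT_CONFIG_PATH)))

# Registry that exporter packages populate when they are imported.
_REGISTRY = {}

def register_exporter(name, factory):
    """Called by each exporter package to make itself configurable."""
    _REGISTRY[name] = factory

def setup_from_config(config):
    """Instantiate every registered exporter named in the parsed config."""
    return [_REGISTRY[name](opts)
            for name, opts in config.get("exporters", {}).items()
            if name in _REGISTRY]

register_exporter("prometheus", lambda opts: ("prometheus", opts["addr"]))
print(setup_from_config({"exporters": {"prometheus": {"addr": "localhost:9999"}}}))
```

Skipping unregistered names (rather than erroring) lets the same shared config file be used by processes that only link a subset of the exporters.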
@bogdandrutu agree that the underlying model should be protobuf-based. I would argue against textproto as the format (it is not widely known) and instead for the yaml -> json -> proto pipeline common in the Kubernetes ecosystem.
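To illustrate that pipeline: the YAML above is structurally equivalent to JSON, which proto3's JSON mapping can then unmarshal into generated message classes. A Python sketch, using a dataclass as a stand-in for a proto-generated message (the yaml -> json step is assumed already done):

```python
import json
from dataclasses import dataclass

# Stand-in for a proto-generated message; a real spec would define this
# in a .proto file and use the generated bindings instead.
@dataclass
class PrometheusConfig:
    addr: str

# JSON form of the YAML proposal above.
raw = '{"exporters": {"prometheus": {"addr": "localhost:9999"}}}'

data = json.loads(raw)
cfg = PrometheusConfig(**data["exporters"]["prometheus"])
print(cfg.addr)
```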