Add Fleet Detection Plugin
Adds an initial plugin that loads at startup and emits metrics for three simple cases: cpus, cpu_freq and total_memory. I'm not sure this is the correct approach, especially as this plugin will expand over time, so I'm willing to take any pointers in that regard.
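For illustration only, here is a minimal stdlib-only sketch of gathering two of those values. This is not the plugin's actual implementation (which presumably goes through a cross-platform system-info crate); the `/proc/meminfo` parsing below is a Linux-specific assumption, and `cpu_freq` is omitted because there is no portable stdlib way to read it.

```rust
use std::fs;
use std::thread;

/// Hypothetical sketch, not the plugin's real code: logical CPU count via the
/// standard library.
fn cpu_count() -> usize {
    thread::available_parallelism().map(|n| n.get()).unwrap_or(1)
}

/// Hypothetical sketch: total memory in KiB, parsed from /proc/meminfo.
/// Linux-only; returns None elsewhere or if the file can't be parsed.
fn total_memory_kib() -> Option<u64> {
    fs::read_to_string("/proc/meminfo")
        .ok()?
        .lines()
        .find(|line| line.starts_with("MemTotal:"))?
        .split_whitespace()
        .nth(1)?
        .parse()
        .ok()
}
```

A startup hook would read values like these once and report them as gauges, which matches the "loads at startup and emits metrics" shape described above.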
Tested using an OpenTelemetry Collector with the following router config and OTel Collector config:
Router config:

```yaml
telemetry:
  apollo:
    experimental_otlp_endpoint: "http://0.0.0.0:4317"
  instrumentation:
    spans:
      mode: "spec_compliant"
```
Collector config:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  debug:
    verbosity: detailed
processors:
  batch:
  filter:
    metrics:
      include:
        match_type: regexp
        metric_names:
          - apollo.router.instance.*
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [filter]
      exporters: [debug]
```
And produced the following
Checklist
Complete the checklist (and note appropriate exceptions) before the PR is marked ready-for-review.
- [x] Changes are compatible[^1]
- [x] Documentation[^2] completed
- [ ] Performance impact assessed and acceptable
- Tests added and passing[^3]
  - [ ] Unit Tests
  - [ ] Integration Tests
  - [x] Manual Tests
Notes
[^1]: It may be appropriate to bring upcoming changes to the attention of other (impacted) groups. Please endeavour to do this before seeking PR approval. The mechanism for doing this will vary considerably, so use your judgement as to how and when to do this.
[^2]: Configuration is an important part of many changes. Where applicable please try to document configuration examples.
[^3]: Tick whichever testing boxes are applicable. If you are adding Manual Tests, please document the manual testing (extensively) in the Exceptions.
✅ Docs Preview Ready
No new or changed pages found.
@jonathanrainer, please consider creating a changeset entry in /.changesets/. These instructions describe the process and tooling.
CI performance tests
- [ ] connectors-const - Connectors stress test that runs with a constant number of users
- [x] const - Basic stress test that runs with a constant number of users
- [ ] demand-control-instrumented - A copy of the step test, but with demand control monitoring and metrics enabled
- [ ] demand-control-uninstrumented - A copy of the step test, but with demand control monitoring enabled
- [ ] enhanced-signature - Enhanced signature enabled
- [ ] events - Stress test for events with a lot of users and deduplication ENABLED
- [ ] events_big_cap_high_rate - Stress test for events with a lot of users, deduplication enabled and high rate event with a big queue capacity
- [ ] events_big_cap_high_rate_callback - Stress test for events with a lot of users, deduplication enabled and high rate event with a big queue capacity using callback mode
- [ ] events_callback - Stress test for events with a lot of users and deduplication ENABLED in callback mode
- [ ] events_without_dedup - Stress test for events with a lot of users and deduplication DISABLED
- [ ] events_without_dedup_callback - Stress test for events with a lot of users and deduplication DISABLED using callback mode
- [ ] extended-reference-mode - Extended reference mode enabled
- [ ] large-request - Stress test with a 1 MB request payload
- [ ] no-tracing - Basic stress test, no tracing
- [ ] reload - Reload test over a long period of time at a constant rate of users
- [ ] step-jemalloc-tuning - Clone of the basic stress test for jemalloc tuning
- [ ] step-local-metrics - Field stats that are generated from the router rather than FTV1
- [ ] step-with-prometheus - A copy of the step test with the Prometheus metrics exporter enabled
- [x] step - Basic stress test that steps up the number of users over time
- [ ] xlarge-request - Stress test with 10 MB request payload
- [ ] xxlarge-request - Stress test with 100 MB request payload
Ok @bnjjj I've run the perf tests, building the router from this branch and enabling metrics so that we can see them being emitted. I'll upload the top file for the router because that's probably the most instructive: memory usage goes up by about 200Mi over the course of the test, which doesn't seem like a terrible thing to me, but I don't have any baselines so it's hard to compare. I'll also attach the router logs.
I have a few further questions as well to push us forward on this:
- We want to ensure that customers can turn this off. We were thinking of re-using the APOLLO_TELEMETRY_DISABLED env var; are you folks happy with that?
- We'll need to add documentation around what's being collected and how to turn it off. Should we do that on this branch too, or is it better to do it separately?
- The CI on this branch seems to be failing on config schema generation, and try as I might I can't see how to fix it. Any pointers?
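On the env var question above, here is a hedged sketch of how the plugin could gate metric emission on APOLLO_TELEMETRY_DISABLED. The set of truthy values accepted is an assumption for illustration, not the router's actual parsing rules:

```rust
use std::env;

/// Hypothetical sketch: decide whether fleet metrics should be emitted, given
/// the raw value of APOLLO_TELEMETRY_DISABLED (None means the var is unset).
/// Which values count as "disabled" is an assumption here.
fn fleet_metrics_enabled(disabled_var: Option<&str>) -> bool {
    !disabled_var
        .map(|v| matches!(v.trim().to_ascii_lowercase().as_str(), "1" | "true" | "yes"))
        .unwrap_or(false)
}

/// Reads the env var and applies the policy above; the plugin's startup hook
/// could check this once before emitting anything.
fn fleet_metrics_enabled_from_env() -> bool {
    fleet_metrics_enabled(env::var("APOLLO_TELEMETRY_DISABLED").ok().as_deref())
}
```

Keeping the decision in a pure function like `fleet_metrics_enabled` makes the on/off behaviour easy to unit-test without mutating process environment in tests.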
@bnjjj Ah yes, apologies, I should have thought of that. I've done that below and redone the tests I posted above so the comparison is easier. From the memory figures it looks like the plugin does increase memory, but it's not the constant increase we were seeing before. The baseline it starts from also appears higher in the branched case, which presumably isn't plugin-related, because the plugin won't be running in the early part of the test (I imagine). I've attached the logs again; let me know if you think there's anything we need to worry about.
Dev Files: otel_dev.log, router_dev.log, top.router_dev.txt
Branch Files: otel_branch.log, router_branch.log, top.router_branch.txt
@BrynCooke I think this might be ready for another review from you?
@jonathanrainer I've pushed a commit that makes things simpler. In particular, activate is now handled uniformly, and the signature is changed to ensure that activate will complete.
Please check that you are happy with the change and also do some manual testing to ensure that things still work.