feat(distribution/systemd): add [email protected] and [email protected]
Hello :wave:,
For production reasons, I had to create a systemd template service to separate instances of vector that run multiple workloads that do not interact. I propose that modification here.
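For illustration, the template unit I have in mind looks roughly like this. This is a sketch, not necessarily the exact file in this PR; apart from the `--config-toml` option and the `%i` instance specifier, the paths, user, and restart policy here are assumptions:

```ini
# [email protected] -- illustrative sketch, not the exact unit in this PR
[Unit]
Description=Vector (instance %i)
Documentation=https://vector.dev
After=network-online.target
Wants=network-online.target

[Service]
User=vector
Group=vector
# %i expands to the instance name, so "systemctl start vector@ingest"
# loads only /etc/vector/ingest.toml -- one isolated workload per unit.
ExecStart=/usr/bin/vector --config-toml /etc/vector/%i.toml
Restart=always

[Install]
WantedBy=multi-user.target
```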
Have a nice day.
Deploy Preview for vrl-playground canceled.
| Name | Link |
|---|---|
| Latest commit | 70da1b7164fd042c906c1c44b468a96f51a0d5fc |
| Latest deploy log | https://app.netlify.com/sites/vrl-playground/deploys/64d9d98c36f2830008de9277 |
Deploy Preview for vector-project canceled.
| Name | Link |
|---|---|
| Latest commit | 70da1b7164fd042c906c1c44b468a96f51a0d5fc |
| Latest deploy log | https://app.netlify.com/sites/vector-project/deploys/64d9d98ccc31f900089815ec |
@spencergilbert I have updated the systemd units with all your comments. Let me know if you want me to make any other modifications.
I will update the packaging as well. I maintain the packaging for the Exherbo distribution, which is a fork of Gentoo. The manifest is here: https://gitlab.exherbo.org/exherbo-unofficial/CleverCloud/-/tree/master/packages/sys-apps/vector
The earlier CI failure is a problem with the job itself, not the changes - I'll merge in the fix once we have it.
Regression Detector Results
Run ID: 07f9c5fa-410d-440b-9199-55281b599eb7
Baseline: d2bc9ddb69ba7769288f78cb60c3000809253843
Comparison: 0a9d50430a7dd0cb5bf67f57c0dfb7f9cf1cef38
Total vector CPUs: 7
Explanation
A regression test is an integrated performance test for vector in a repeatable rig, with varying configuration for vector. What follows is a statistical summary of a brief vector run for each configuration across the SHAs given above. The goal of these tests is to quickly determine whether vector performance is changed by a pull request, and to what degree.
Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval.
We decide whether a change in performance is a "regression" -- a change worth investigating further -- if both of the following two criteria are true:
- The estimated |Δ mean %| ≥ 5.00%. This criterion intends to answer the question "Does the estimated change in mean optimization goal performance have a meaningful impact on your customers?". We assume that when |Δ mean %| < 5.00%, the impact on your customers is not meaningful. We also assume that a performance change in optimization goal is worth investigating whether it is an increase or decrease, so long as the magnitude of the change is sufficiently large.
- Zero is not in the 90.00% confidence interval "Δ mean % CI" about "Δ mean %". This statement is equivalent to saying that there is at least a 90.00% chance that the mean difference in optimization goal is not zero. This criterion intends to answer the question, "Is there a statistically significant difference in mean optimization goal performance?". It also means there is no more than a 10.00% chance this criterion reports a statistically significant difference when the true difference in mean optimization goal is zero -- a "false positive". We assume you are willing to accept a 10.00% chance of inaccurately detecting a change in performance when no true difference exists.
The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values of "Δ mean %" mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.
No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.
Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
|---|---|---|---|---|
| datadog_agent_remap_blackhole_acks | ingress throughput | +1.35 | [+1.25, +1.46] | 100.00% |
| syslog_loki | ingress throughput | +1.16 | [+1.07, +1.25] | 100.00% |
| datadog_agent_remap_datadog_logs_acks | ingress throughput | +1.01 | [+0.90, +1.11] | 100.00% |
| file_to_blackhole | egress throughput | +0.92 | [-2.79, +4.62] | 24.91% |
| otlp_http_to_blackhole | ingress throughput | +0.89 | [+0.76, +1.03] | 100.00% |
| syslog_regex_logs2metric_ddmetrics | ingress throughput | +0.60 | [+0.28, +0.92] | 98.37% |
| socket_to_socket_blackhole | ingress throughput | +0.44 | [+0.39, +0.49] | 100.00% |
| datadog_agent_remap_datadog_logs | ingress throughput | +0.31 | [+0.21, +0.41] | 99.99% |
| otlp_grpc_to_blackhole | ingress throughput | +0.18 | [+0.07, +0.29] | 96.21% |
| http_to_http_acks | ingress throughput | +0.15 | [-1.11, +1.41] | 12.16% |
| enterprise_http_to_http | ingress throughput | +0.03 | [-0.00, +0.06] | 74.01% |
| splunk_hec_to_splunk_hec_logs_acks | ingress throughput | +0.01 | [-0.06, +0.07] | 12.42% |
| fluent_elasticsearch | ingress throughput | +0.00 | [-0.00, +0.00] | 44.47% |
| splunk_hec_indexer_ack_blackhole | ingress throughput | +0.00 | [-0.04, +0.04] | 0.05% |
| splunk_hec_to_splunk_hec_logs_noack | ingress throughput | -0.02 | [-0.06, +0.03] | 37.60% |
| http_to_http_json | ingress throughput | -0.02 | [-0.06, +0.02] | 50.94% |
| http_to_http_noack | ingress throughput | -0.03 | [-0.09, +0.03] | 47.81% |
| http_text_to_http_json | ingress throughput | -0.45 | [-0.52, -0.39] | 100.00% |
| syslog_splunk_hec_logs | ingress throughput | -1.18 | [-1.26, -1.10] | 100.00% |
| syslog_humio_logs | ingress throughput | -1.35 | [-1.43, -1.27] | 100.00% |
| syslog_log2metric_splunk_hec_metrics | ingress throughput | -1.60 | [-1.69, -1.52] | 100.00% |
| syslog_log2metric_humio_metrics | ingress throughput | -2.18 | [-2.27, -2.09] | 100.00% |
| datadog_agent_remap_blackhole | ingress throughput | -2.63 | [-2.73, -2.52] | 100.00% |
| splunk_hec_route_s3 | ingress throughput | -3.90 | [-4.04, -3.75] | 100.00% |
Hello @jszwedko, thank you for the review.
I think the systemd template service and the plain service don't have the same purpose.
The templated version isolates a workload through the --config-toml option, whereas the plain service loads all files in the /etc/vector directory.
To me, they are complementary, which is why we should distribute both.
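As a sketch of the difference in usage (the instance names here are hypothetical):

```sh
# Plain service: a single vector process loads every file in /etc/vector
systemctl start vector

# Template service: one isolated vector process per workload; the part
# after "@" picks a single config, e.g. /etc/vector/ingest.toml (hypothetical)
systemctl start vector@ingest
systemctl start vector@metrics
```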
Gotcha, yeah, I see. My main concern is these two sets of unit files becoming out of sync with each other. I realize the default one runs all files in /etc/vector. I'm just wondering if we could use the systemd DefaultInstance directive and only ship the template files 🤔
Hello @jszwedko, I missed your last message.
I checked what the DefaultInstance systemd directive does, and from what I understand, it will not emulate the behavior of the current vector.service.
To explain: if we use the DefaultInstance directive in [email protected], it will point to only one file, config.toml (in this example, enabling the unit would amount to enabling [email protected]), whereas vector.service will load and use every configuration file it finds in the /etc/vector directory.
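To illustrate the point (a sketch; config.toml is the hypothetical default instance):

```ini
# In [email protected] -- sketch of what DefaultInstance would do
[Install]
WantedBy=multi-user.target
# "systemctl enable [email protected]" becomes equivalent to enabling
# [email protected]: it loads only /etc/vector/config.toml, not every
# configuration file in /etc/vector like vector.service does today.
DefaultInstance=config
```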
Hello @jszwedko, is this okay with you?
Apologies for the delay! I'm still not crazy about having to maintain both of these almost-identical systemd unit files, because I predict they will fall out of sync. I'd really like to see us find a way to bring them together using DefaultInstance, or at least add some tests that will fail if we update one file but forget to update the other (these tests could just use diff to compare for expected differences). What do you think?
I do see the challenges with using DefaultInstance here, so I would be okay with just the additional tests that diff the files but ignore expected differences. Maybe something like:
```sh
diff <(grep -v Description distribution/systemd/vector.service) \
     <(grep -v Description distribution/systemd/[email protected])
```
(with additional grep -v patterns for each line we expect to be different)
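Expanded into a runnable sketch (the excluded patterns beyond Description are only assumptions about which lines will legitimately differ):

```sh
#!/usr/bin/env bash
# Fail CI if vector.service and [email protected] drift apart,
# ignoring lines that are expected to differ between the two units.
set -euo pipefail

filter() {
  grep -vE '^(Description|ExecStart|ExecStartPre)=' "$1"
}

# diff exits non-zero on any remaining difference, failing the script.
diff <(filter distribution/systemd/vector.service) \
     <(filter distribution/systemd/[email protected])
```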
Hello, could we merge it? I will not make that modification; for future pull requests, as you are keen on the topic ;), I think it is up to the reviewer to ensure that.
> I think it is up to the reviewer to ensure that.
This is the bit I'm concerned about 😄 It'd be very easy to miss the fact that only one set of service definitions was updated when reviewing. I'd really like to see a test added for this before we merge.