chore(deps): Try LTO to see if it fixes linking issues on old OSes
Seeing if this fixes nix compilation as suggested by https://github.com/nix-rust/nix/issues/1972#issuecomment-1521047819
Signed-off-by: Jesse Szwedko [email protected]
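For reference, a minimal sketch of the kind of change being tried, assuming it lands in the workspace Cargo.toml (the actual diff in this PR may differ):

```toml
# Enable link-time optimization for release builds. "fat" LTO optimizes
# across all crates at link time; it compiles more slowly than the
# default, but can produce faster, smaller binaries, and is the sort of
# setting this PR experiments with to see whether it avoids the linking
# issue referenced above.
[profile.release]
lto = "fat"
```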
Deploy Preview for vector-project ready!
| Name | Link |
|---|---|
| Latest commit | 33338d297dcd5d13a2aa8e148e50596a93fe9ba4 |
| Latest deploy log | https://app.netlify.com/sites/vector-project/deploys/64594039a2eba60008b61586 |
| Deploy Preview | https://deploy-preview-17342--vector-project.netlify.app |
Deploy Preview for vrl-playground canceled.
| Name | Link |
|---|---|
| Latest commit | 33338d297dcd5d13a2aa8e148e50596a93fe9ba4 |
| Latest deploy log | https://app.netlify.com/sites/vrl-playground/deploys/64594039a8d32900081ffa90 |
Datadog Report
Branch report: jszwedko/try-lto-nix
Commit report: 78e77bd
✅ vector: 0 Failed, 0 New Flaky, 5 Passed, 0 Skipped, 10.03s Wall Time
Regression Detector Results
Run ID: 7ee92b96-2091-4999-8a39-acd6823a5d8b
Baseline: bf8376c3030e6d6df61ca245f2d8be87443bf075
Comparison: 33338d297dcd5d13a2aa8e148e50596a93fe9ba4
Total vector CPUs: 7
Explanation
A regression test is an integrated performance test for vector in a repeatable rig, with varying configurations of vector. What follows is a statistical summary of a brief vector run for each configuration across the SHAs given above. The goal of these tests is to determine quickly whether vector performance is changed, and to what degree, by a pull request.
Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval.
We decide whether a change in performance is a "regression" -- a change worth investigating further -- if both of the following two criteria are true:
- The estimated |Δ mean %| ≥ 5.00%. This criterion intends to answer the question "Does the estimated change in mean optimization goal performance have a meaningful impact on your customers?". We assume that when |Δ mean %| < 5.00%, the impact on your customers is not meaningful. We also assume that a performance change in optimization goal is worth investigating whether it is an increase or decrease, so long as the magnitude of the change is sufficiently large.
- Zero is not in the 90.00% confidence interval "Δ mean % CI" about "Δ mean %". This statement is equivalent to saying that there is at least a 90.00% chance that the mean difference in optimization goal is not zero. This criterion intends to answer the question, "Is there a statistically significant difference in mean optimization goal performance?". It also means there is no more than a 10.00% chance this criterion reports a statistically significant difference when the true difference in mean optimization goal is zero -- a "false positive". We assume you are willing to accept a 10.00% chance of inaccurately detecting a change in performance when no true difference exists.
The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values of "Δ mean %" mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.
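As a worked example of these two criteria, take the top row of the table below: syslog_log2metric_splunk_hec_metrics has "Δ mean %" = +28.37 with a "Δ mean % CI" of [+27.98, +28.77]; |+28.37| ≥ 5.00% and zero is not inside the interval, so both criteria hold and the change is reported.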
Changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%:
| experiment | goal | Δ mean % | confidence |
|---|---|---|---|
| syslog_log2metric_splunk_hec_metrics | ingress throughput | +28.37 | 100.00% |
| syslog_splunk_hec_logs | ingress throughput | +19.02 | 100.00% |
| syslog_loki | ingress throughput | +17.88 | 100.00% |
| syslog_humio_logs | ingress throughput | +16.30 | 100.00% |
| syslog_regex_logs2metric_ddmetrics | ingress throughput | +15.44 | 100.00% |
| datadog_agent_remap_blackhole_acks | ingress throughput | +14.56 | 100.00% |
| datadog_agent_remap_datadog_logs | ingress throughput | +12.91 | 100.00% |
| datadog_agent_remap_datadog_logs_acks | ingress throughput | +12.89 | 100.00% |
| datadog_agent_remap_blackhole | ingress throughput | +12.25 | 100.00% |
| otlp_http_to_blackhole | ingress throughput | +11.62 | 100.00% |
| otlp_grpc_to_blackhole | ingress throughput | +11.23 | 100.00% |
| http_text_to_http_json | ingress throughput | +8.11 | 100.00% |
| socket_to_socket_blackhole | ingress throughput | +7.70 | 100.00% |
| syslog_log2metric_humio_metrics | ingress throughput | +7.28 | 100.00% |
| splunk_hec_route_s3 | ingress throughput | +6.04 | 100.00% |
Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
|---|---|---|---|---|
| syslog_log2metric_splunk_hec_metrics | ingress throughput | +28.37 | [+27.98, +28.77] | 100.00% |
| syslog_splunk_hec_logs | ingress throughput | +19.02 | [+18.96, +19.09] | 100.00% |
| syslog_loki | ingress throughput | +17.88 | [+17.81, +17.96] | 100.00% |
| syslog_humio_logs | ingress throughput | +16.30 | [+16.19, +16.41] | 100.00% |
| syslog_regex_logs2metric_ddmetrics | ingress throughput | +15.44 | [+15.13, +15.76] | 100.00% |
| datadog_agent_remap_blackhole_acks | ingress throughput | +14.56 | [+14.48, +14.64] | 100.00% |
| datadog_agent_remap_datadog_logs | ingress throughput | +12.91 | [+12.83, +13.00] | 100.00% |
| datadog_agent_remap_datadog_logs_acks | ingress throughput | +12.89 | [+12.78, +13.00] | 100.00% |
| datadog_agent_remap_blackhole | ingress throughput | +12.25 | [+12.09, +12.42] | 100.00% |
| otlp_http_to_blackhole | ingress throughput | +11.62 | [+11.44, +11.79] | 100.00% |
| otlp_grpc_to_blackhole | ingress throughput | +11.23 | [+11.11, +11.34] | 100.00% |
| http_text_to_http_json | ingress throughput | +8.11 | [+8.03, +8.18] | 100.00% |
| socket_to_socket_blackhole | ingress throughput | +7.70 | [+7.64, +7.75] | 100.00% |
| syslog_log2metric_humio_metrics | ingress throughput | +7.28 | [+7.16, +7.41] | 100.00% |
| splunk_hec_route_s3 | ingress throughput | +6.04 | [+5.90, +6.17] | 100.00% |
| http_to_http_json | ingress throughput | +0.52 | [+0.47, +0.57] | 100.00% |
| file_to_blackhole | ingress throughput | +0.04 | [-0.00, +0.09] | 75.03% |
| enterprise_http_to_http | ingress throughput | +0.02 | [-0.01, +0.05] | 54.31% |
| fluent_elasticsearch | ingress throughput | +0.00 | [-0.00, +0.00] | 38.47% |
| http_to_http_noack | ingress throughput | -0.00 | [-0.06, +0.06] | 0.82% |
| splunk_hec_to_splunk_hec_logs_noack | ingress throughput | -0.00 | [-0.05, +0.04] | 11.06% |
| splunk_hec_indexer_ack_blackhole | ingress throughput | -0.01 | [-0.05, +0.04] | 17.96% |
| splunk_hec_to_splunk_hec_logs_acks | ingress throughput | -0.01 | [-0.07, +0.04] | 22.66% |
| http_to_http_acks | ingress throughput | -0.04 | [-1.25, +1.18] | 3.02% |
@zamazan4ik you might find this interesting too :) Turns out we weren't doing LTO on release assets.
Hah, I was sure it was enabled, since Vector has https://github.com/vectordotdev/vector/blob/master/scripts/environment/release-flags.sh ... :)
Aha, you are actually right :D Apparently we just don't use those flags for the performance tests.
You might want to add the other optimization options from: https://github.com/cloud-hypervisor/cloud-hypervisor/blob/main/Cargo.toml#L20-L24
Right now I'm seeing:

```
276M Jan 31 23:19 ../vector-non-lto*
146M Feb  7 18:42 target/release/vector*   # with LTO
 69M Feb  1 00:03 target/release/vector*   # release profile options from c-h
```
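For context, the cloud-hypervisor lines referenced above are size-oriented release-profile settings along these lines (a sketch; see the linked Cargo.toml for the exact values that project uses):

```toml
[profile.release]
lto = true          # whole-program link-time optimization
codegen-units = 1   # single codegen unit: better optimization, slower builds
opt-level = "s"     # optimize for binary size instead of speed
strip = true        # strip symbols from the final binary
```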
How does enabling the `opt-level = "s"` option influence Vector performance? I think we cannot simply switch from `opt-level = 3` to `opt-level = "s"` without benchmarking. AFAIK, for Vector right now performance is more important than binary size. And do not forget, the actual release flags for Vector are described here: https://github.com/vectordotdev/vector/blob/master/scripts/environment/release-flags.sh ;)
> How does enabling the `opt-level = "s"` option influence Vector performance?
I had `opt-level = "s"` on one of my instances and redeployed without it; apparently, the utilization has actually increased on average.
> AFAIK, for Vector right now performance is more important than binary size
From the perspective of Linux distribution packages, binary size is much more important. These are the numbers for NixOS:
```
$ eza -lh /nix/store/*vector*/bin/vector
Permissions Size User Date Modified Name
.r-xr-xr-x  238M root  1 Jan  1970  /nix/store/1bsflz2c1kr9gmfkd7jmh6swh892mdy1-vector-0.34.1/bin/vector  # Normal
.r-xr-xr-x   86M root  1 Jan  1970  /nix/store/3bjgilkwn2s82qry3lx0gigpp7wqg8kz-vector-0.34.1/bin/vector  # LTO/stripped
.r-xr-xr-x  109M root  1 Jan  1970  /nix/store/3w2sqc4353j9zz5yzl5qjcz003jwx1cn-vector-0.34.1/bin/vector  # LTO
```
Currently, for each Vector release or package update, the supporting infrastructure (which might not be cheap) has to push a 238M binary package to every single deployed instance, rather than a much slimmer 86M one that does the exact same thing. Sure, I could just change that at the NixOS package level, but it is better to get it upstreamed for every other distro out there too.
Vector releases do enable LTO (per https://github.com/vectordotdev/vector/pull/17342#issuecomment-1932674613, this is done by changing the flags in CI). The binaries we distribute are around 120 MB. Certainly they could be smaller but, as @zamazan4ik notes, Vector's focus is performance, so the compilation is optimized for that. I can see that Nix is sensitive to the package size, though; maybe it'd make sense to use `opt-level = "s"` when compiling Vector for distribution on that platform?
I swear the binaries used to be smaller too, around 80 MB. I might bisect down to see if there were specific commits that bumped us up significantly or if it was a slow burn.
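If `opt-level = "s"` were adopted for size-sensitive platforms, one option would be a separate opt-in Cargo profile, so that the default release profile keeps optimizing for speed. A sketch (the profile name here is illustrative, not something Vector currently defines):

```toml
# Hypothetical size-focused profile for distribution packagers,
# built with `cargo build --profile release-size`.
[profile.release-size]
inherits = "release"  # start from the existing release settings
opt-level = "s"       # favor binary size over raw speed
lto = true
strip = true          # drop symbols to shrink the binary further
```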
> Vector releases do enable LTO (per https://github.com/vectordotdev/vector/pull/17342#issuecomment-1932674613, this is done by changing the flags in CI). The binaries we distribute are around 120 MB.
The issue is that no downstream Linux distribution is going to use the prebuilt binaries. I could patch this in, but much easier if this just gets merged.
I pushed the same thing to another Rust project last month, and it even landed in the stable NixOS release as of yesterday:
```
.r-xr-xr-x 61M root 1 Jan 1970 /nix/store/98ka7zbb7x88vv23yic0h99nx1spv4s7-garage-0.9.0/bin/garage
.r-xr-xr-x 28M root 1 Jan 1970 /nix/store/dl68bfmrkbn98qlpa2i84j3qpxsixkzp-garage-0.9.2/bin/garage
```
That's a good point :) I opened https://github.com/vectordotdev/vector/pull/20034. Let me know what you think of that.