datadog-agent
Bump the franz-go group with 2 updates
Bumps the franz-go group with 2 updates: github.com/twmb/franz-go and github.com/twmb/franz-go/pkg/kadm.
Updates github.com/twmb/franz-go from 1.17.0 to 1.17.1
Changelog
Sourced from github.com/twmb/franz-go's changelog.
v1.17.1
This patch release fixes four bugs (two are fixed in one commit), contains two internal improvements, and adds two other minor changes.
Bug fixes
- If you were using the `MaxBufferedBytes` option and ever hit the max, odds are you would eventually experience a deadlock. That has been fixed.
- If you ever produced a record with no topic field and without using `DefaultProduceTopic`, or if you produced a transactional record while not in a transaction, AND the client was at the maximum buffered records, odds are you would eventually deadlock. This has been fixed.
- It was previously not possible to set lz4 compression levels. This has been fixed.
- There was a data race on a boolean field if a produce request was being written at the same time a metadata update happened, and the metadata update had an error on the topic or partition actively being written. Note that the race was unlikely; if you experienced it, you would have noticed an OutOfOrderSequenceNumber error. See this comment for more details.
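For context, the options named in these fixes are all set at client construction. Below is a minimal producer sketch showing where they plug in; the broker address and topic name are placeholders, not values from this PR:

```go
package main

import (
	"context"
	"log"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	cl, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"), // placeholder broker
		// Topic used when a produced record has no topic field set.
		kgo.DefaultProduceTopic("events"), // placeholder topic
		// The buffering cap whose deadlock was fixed in this release.
		kgo.MaxBufferedBytes(32<<20),
		// lz4 levels are settable as of this release.
		kgo.ProducerBatchCompression(kgo.Lz4Compression().WithLevel(9)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cl.Close()

	cl.Produce(context.Background(), &kgo.Record{Value: []byte("hello")},
		func(_ *kgo.Record, err error) {
			if err != nil {
				log.Printf("produce failed: %v", err)
			}
		})
}
```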
Improvements
- Canceling the context you pass to `Produce` now propagates in two more areas: the initial `InitProducerID` request that occurs the first time you produce, and when the client is internally backing off due to a produce request failure. Note that there is no guarantee on which context is used for cancelation if you produce many records, and the client does not allow canceling if it is currently unsafe to do so. However, this does mean that if your cluster is somewhat down such that `InitProducerID` is failing on your new client, you can now actually cause `Produce` to quit. See this comment for what it means for a record to be "safe" to fail.
- The client now ignores aborted records while consuming only if you have configured `FetchIsolationLevel(ReadCommitted())`. Previously, the client relied entirely on the `FetchResponse` `AbortedTransactions` field, but it is possible for brokers to send aborted transactions even when not using read committed. Specifically, this was a behavior difference in Redpanda, and the KIP that introduced transactions and all relevant documents do not mention what the broker behavior actually should be here. Redpanda itself was also changed to not send aborted transactions when using read committed, but we may as well improve franz-go as well.
- Decompression now better reuses buffers under the hood, reducing allocations.
- Brokers that return preferred replicas to fetch from now cause an info-level log in the client.
... (truncated)
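The isolation-level change above only applies when the option is set explicitly. A minimal consumer sketch showing that configuration (broker and topic are placeholders, and this is an illustration of the option, not code from the PR):

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	cl, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"), // placeholder broker
		kgo.ConsumeTopics("events"),       // placeholder topic
		// Aborted records are skipped only when this is configured.
		kgo.FetchIsolationLevel(kgo.ReadCommitted()),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cl.Close()

	// A bounded context; per the notes above, cancelation now also
	// propagates into more of the client's internal waiting.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	fetches := cl.PollFetches(ctx)
	fetches.EachRecord(func(r *kgo.Record) {
		log.Printf("consumed: %s", r.Value)
	})
}
```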
Commits
- 8b955b4 Merge pull request #793 from twmb/changelog-v1.17.1
- 7f8a294 CHANGELOG: update for 1.17.1
- afcb32b Merge pull request #792 from twmb/769
- 305d8dc kgo: allow record ctx cancelation to propagate a bit more
- d4982d7 kgo: add failure for 769
- 718591a Merge pull request #787 from twmb/777
- 4e14d75 Merge pull request #786 from twmb/785
- e16c46c Merge pull request #781 from asg0451/fix-lz4-compression-levels
- 187266a Merge pull request #774 from kalbhor/master
- 940ed68 Merge pull request #762 from twmb/preferred_log
- Additional commits viewable in compare view
Updates github.com/twmb/franz-go/pkg/kadm from 1.12.0 to 1.13.0
Changelog
Sourced from github.com/twmb/franz-go/pkg/kadm's changelog.
v1.13.0
This release contains a few new APIs, two rare bug fixes, updates to plugins, and changes the library to now require Go 1.18.
Go version
This library has supported Go 1.15 since the beginning. There have been many useful features that this library has not been able to use because of continued backcompat for 1.15. There is really no reason to support such an old version of Go, and Go itself does not support releases prior to 1.18 -- and 1.18 is currently only supported for security backports. Switching to 1.18 allows this library to remove a few 1.15 / 1.16 backcompat files, and allows switching this library from `interface{}` to `any`.
Behavior changes
If group consuming fails with an error that looks non-retryable, the error is now injected into polling as a fake errored fetch. Multiple people have run into problems where their group consumers were failing due to ACLs or due to network issues, and it is hard to detect these failures: you either have to pay close attention to logs, or you have to hook into HookGroupManageError. Now, the error is injected into polling.
Bug fixes
This release contains two bug fixes, one of which is very rare to encounter, and one of which is very easy to encounter but requires configuring the client in a way that (nearly) nobody does.
Rare: If you were using EndAndBeginTransaction, there was an internal race that could result in a deadlock.
Rare configuration: If you configured balancers manually, and you configured CooperativeSticky with any other eager balancer, then the client would internally sometimes think it was eager consuming, and sometimes think it was cooperative consuming. This would result in stuck partitions while consuming.
Features
- HookClientClosed: A new hook that allows a callback when the client is closed.
- HookProduceRecordPartitioned: A new hook that is called when a record's partition is chosen.
- Client.ConfigValue: Returns the value for any configuration option.
- Client.ConfigValues: Returns the values for any configuration option (for multiple value options, or for strings that are internally string pointers).
- kadm: a few minor API improvements.
- plugin/klogr: A new plugin that satisfies the go-logr/logr interfaces.
- pkg/kfake: A new experimental package that will be added to over time, this mocks brokers and can be used in unit testing (only basic producing & consuming supported so far).
... (truncated)
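The two new hooks in the feature list are provided by implementing the corresponding interfaces and passing the value via `kgo.WithHooks`. A hedged sketch under that assumption (the broker address is a placeholder, and the method signatures reflect my reading of the hook interfaces, not code from this PR):

```go
package main

import (
	"log"

	"github.com/twmb/franz-go/pkg/kgo"
)

// clientHooks implements the new HookClientClosed and
// HookProduceRecordPartitioned hook interfaces.
type clientHooks struct{}

func (clientHooks) OnClientClosed(*kgo.Client) {
	log.Println("client closed")
}

func (clientHooks) OnProduceRecordPartitioned(r *kgo.Record, partition int32) {
	log.Printf("record for topic %q assigned partition %d", r.Topic, partition)
}

func main() {
	cl, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"), // placeholder broker
		kgo.WithHooks(clientHooks{}),
	)
	if err != nil {
		log.Fatal(err)
	}
	cl.Close() // triggers OnClientClosed
}
```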
Commits
- d288770 Merge pull request #393 from twmb/changelog-v1.13.0
- b9ce0f6 CHANGELOG: note incoming v1.13.0
- d845069 Merge pull request #392 from twmb/txn_bugfix
- 1b229ce kgo: bugfix transaction ending & beginning
- 0eae6ae Merge pull request #391 from twmb/no_consume_recreated
- d27690a kgo: internal resilience
- 9a8f04a Revert "kgo: add ConsumeRecreatedTopics option, for UNKNOWN_TOPIC_ID"
- ee51d5a Merge pull request #386 from twmb/hooks
- 461d2ef kgo: add HookClientClosed and HookProduceRecordPartitioned
- 24afab3 Merge pull request #383 from twmb/opt_interrogate
- Additional commits viewable in compare view
You can trigger a rebase of this PR by commenting @dependabot rebase.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
- `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
- `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
- `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency
- `@dependabot unignore <dependency name> <ignore condition>` will remove the ignore condition of the specified dependency and ignore conditions
Note: Automatic rebases have been disabled on this pull request as it has been open for over 30 days.
Regression Detector
Regression Detector Results
Run ID: 678fe3d2-fba7-4c8f-a8e6-a47fda4e0a46 Metrics dashboard Target profiles
Baseline: 5e6ddb34f1bbaf6cd21cb6cd09c1d4a82479b561 Comparison: 0e5f2ffed65a12c7e39341346967ce877c0619b8
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
No significant changes in experiment optimization goals
Confidence level: 90.00% Effect size tolerance: |Δ mean %| ≥ 5.00%
There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.
Fine details of change detection per experiment
| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| ➖ | pycheck_lots_of_tags | % cpu utilization | +0.66 | [-1.88, +3.19] | 1 | Logs |
| ➖ | otel_to_otel_logs | ingress throughput | +0.60 | [-0.21, +1.42] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | +0.48 | [-0.24, +1.20] | 1 | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | +0.41 | [+0.36, +0.47] | 1 | Logs |
| ➖ | idle_all_features | memory utilization | +0.21 | [+0.11, +0.32] | 1 | Logs, bounds checks dashboard |
| ➖ | file_to_blackhole_300ms_latency | egress throughput | +0.09 | [-0.10, +0.27] | 1 | Logs |
| ➖ | file_tree | memory utilization | +0.03 | [-0.09, +0.16] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | +0.01 | [-0.09, +0.11] | 1 | Logs |
| ➖ | file_to_blackhole_100ms_latency | egress throughput | +0.01 | [-0.22, +0.23] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.01, +0.01] | 1 | Logs |
| ➖ | file_to_blackhole_0ms_latency | egress throughput | -0.01 | [-0.34, +0.32] | 1 | Logs |
| ➖ | file_to_blackhole_1000ms_latency | egress throughput | -0.07 | [-0.56, +0.42] | 1 | Logs |
| ➖ | file_to_blackhole_500ms_latency | egress throughput | -0.26 | [-0.50, -0.01] | 1 | Logs |
| ➖ | idle | memory utilization | -0.28 | [-0.33, -0.23] | 1 | Logs, bounds checks dashboard |
| ➖ | basic_py_check | % cpu utilization | -1.50 | [-4.31, +1.30] | 1 | Logs |
Bounds Checks
| perf | experiment | bounds_check_name | replicates_passed |
|---|---|---|---|
| ✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 |
| ✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 |
| ✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 |
| ✅ | file_to_blackhole_300ms_latency | memory_usage | 10/10 |
| ✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 |
| ✅ | idle | memory_usage | 10/10 |
| ✅ | idle_all_features | memory_usage | 10/10 |
Explanation
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
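The three criteria above combine into a simple decision rule. A minimal sketch of it (an illustration of the stated criteria, not the detector's actual implementation):

```go
package main

import (
	"fmt"
	"math"
)

// isRegression applies the detector's stated criteria: the estimated
// |Δ mean %| meets the effect size tolerance, the confidence interval
// excludes zero, and the experiment is not marked erratic.
func isRegression(deltaMeanPct, ciLow, ciHigh, tolerance float64, erratic bool) bool {
	bigEnough := math.Abs(deltaMeanPct) >= tolerance
	ciExcludesZero := ciLow > 0 || ciHigh < 0
	return bigEnough && ciExcludesZero && !erratic
}

func main() {
	// tcp_syslog_to_blackhole: +0.41 with CI [+0.36, +0.47].
	// The CI excludes zero, but |Δ| < 5.00%, so it is not flagged.
	fmt.Println(isRegression(0.41, 0.36, 0.47, 5.00, false))
}
```

This matches the table above: several rows have CIs excluding zero, but none clears the 5.00% effect size tolerance, so no regression is reported.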
Looks like these dependencies are updatable in another way, so this is no longer needed.