enhancement(splunk_hec sink): Use a response cookie to route ack checks to the same Splunk indexer
Summary
This is particularly useful when running Splunk in a clustered environment with multiple indexer hosts. In that environment, acknowledgement IDs are frequently duplicated across indexers: they all start at 0 and count upward as they receive requests with the same `X-Splunk-Request-Channel` header, so there is lots of reuse. One common way to distinguish between multiple hosts behind a load balancer is to return a cookie identifying which indexer subsequent requests should be routed back to. Cookie stickiness is, for instance, the recommended way to set up an AWS ELB in front of a Splunk indexer cluster:
- https://docs.splunk.com/Documentation/AddOns/released/Firehose/ConfigureanELB
- https://community.splunk.com/t5/Getting-Data-In/How-to-configure-the-load-balancer-to-handle-HEC-data/td-p/742116
I'm not actually sure how acknowledgements would have worked with multiple indexers previously. From what I can tell, since the hashmap is keyed by ack ID alone, it would only ever work with single-indexer clusters: colliding ack IDs make it impossible to tell which indexer to query for acknowledgement status.
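To make the collision concrete, here is a minimal Rust sketch of tracking pending acks keyed by the routing cookie in addition to the ack ID, so the same numeric ID issued by two different indexers no longer clashes. All names here are hypothetical illustrations, not the PR's actual types:

```rust
use std::collections::HashMap;

// Hypothetical sketch only; `AckTracker` and `PendingAck` are illustrative
// names, not the sink's real types.
struct PendingAck {
    // Cookie value (e.g. an AWSALB cookie) returned by the indexer that
    // issued this ack ID; sent back on ack-status polls so the load
    // balancer routes the poll to the same indexer.
    cookie: String,
    retries_remaining: u32,
}

struct AckTracker {
    // Keying by (cookie, ack_id) instead of ack_id alone means the same
    // numeric ID from two different indexers occupies two distinct entries.
    pending: HashMap<(String, u64), PendingAck>,
}

impl AckTracker {
    fn record(&mut self, cookie: String, ack_id: u64, retries: u32) {
        let key = (cookie.clone(), ack_id);
        self.pending
            .insert(key, PendingAck { cookie, retries_remaining: retries });
    }
}
```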
I'm also very new to writing Rust, so some of the stuff I've written might not be the best way to do things. Feedback is very welcome!
Change Type
- [X] Bug fix
- [X] New feature
- [ ] Non-functional (chore, refactoring, docs)
- [ ] Performance
Is this a breaking change?
- [ ] Yes
- [X] No
How did you test this PR?
I tested with some added unit and integration tests, as well as with the local config below, running `cargo run --release -- --config test_config.yaml`. This was sending to our Splunk cluster with 50+ indexers behind an AWS ALB; no events were dropped and no error logs were generated over ~10 hours:
```yaml
---
sources:
  demo_logs:
    type: demo_logs
    format: shuffle
    lines:
      - "jvperrin test log"
    sequence: true
sinks:
  splunk:
    type: splunk_hec_logs
    inputs:
      - demo_logs
    acknowledgements:
      enabled: true
      indexer_acknowledgements_enabled: true
      cookie_name: "AWSALB"
    healthcheck:
      enabled: true
    endpoint: "<REDACTED>"
    default_token: "<REDACTED>"
    index: "vector_poc"
    sourcetype: "jvperrin_test"
    encoding:
      codec: json
```
Does this PR include user facing changes?
- [X] Yes. I've added a changelog entry
References
- Closes: #19417
Hi @jvperrin, sorry for the delay on this one. Our Splunk IT suite needs some attention (#22379). One thing that stood out is that this new cookie is mandatory which makes this a breaking change. This should be an option to preserve backwards compatibility.
@pront does having `cookie_name: String::new()` at https://github.com/vectordotdev/vector/pull/23156/files#diff-e552ee7226373645b14f33a1cdfdd67eba694938b3f265ac2ac3105d37c75b5fR72 not make it optional? That matched what the other optional settings there appeared to do, and the docs generated with `required: false`, so I was under the impression it was working properly. Having a default of `""` (or, in fact, any single static value) should preserve the existing behavior by grouping all the ack IDs into the same bucket, which is why I added that as the default.
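For illustration, a minimal sketch of the serde-default approach described above; the struct and field names are hypothetical, not the PR's exact definitions:

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct AckSettings {
    // With `#[serde(default)]`, omitting `cookie_name` from the config yields
    // "" via String::default() (the same value as String::new()), so all ack
    // IDs land in one bucket and the pre-cookie behavior is preserved.
    #[serde(default)]
    cookie_name: String,
}

fn main() {
    // A config that never mentions cookie_name still deserializes.
    let cfg: AckSettings = serde_json::from_str("{}").unwrap();
    assert_eq!(cfg.cookie_name, "");
}
```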
I did run the integration tests locally (with `make test-integration-splunk`) and they were working fine for me there, although I did notice that I couldn't add any Splunk version newer than 8.2.4 (which went EOL ~2 years ago), as that broke the tests for the metrics sink. I wasn't seeing any timeouts or slowness, though, but then again I wasn't running the tests in GitHub Actions either.
> @pront does having `cookie_name: String::new()` at #23156 (files) not make it optional?
Hello, see this example: https://github.com/vectordotdev/vector/blob/master/src/sources/host_metrics/mod.rs#L117-L120
That's what I was thinking about.
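To spell out the alternative the linked example points at, here is a hedged sketch of the `Option`-based pattern; the field and helper names are hypothetical. The field is `None` when omitted, which makes "no cookie routing" explicit rather than relying on a sentinel empty string:

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct AckSettings {
    // serde treats a missing field as None for Option types, so omitting
    // cookie_name keeps the old single-bucket behavior without a sentinel "".
    cookie_name: Option<String>,
}

// Hypothetical helper: build the ack-map key, collapsing to one shared
// bucket ("") when no cookie name is configured.
fn ack_key(settings: &AckSettings, ack_id: u64) -> (String, u64) {
    (settings.cookie_name.clone().unwrap_or_default(), ack_id)
}

fn main() {
    let cfg: AckSettings = serde_json::from_str("{}").unwrap();
    assert_eq!(ack_key(&cfg, 7), (String::new(), 7));
}
```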
> I did run the integration tests locally (with `make test-integration-splunk`) and they were working fine for me there, although I did notice that I couldn't add any Splunk version newer than 8.2.4 (which went EOL ~2 years ago), as that broke the tests for the metrics sink. I wasn't seeing any timeouts or slowness, though, but then again I wasn't running the tests in GitHub Actions either.
Thank you for sharing this. Maybe we should enable the suite (on the master branch) and only ignore the failing tests.