Add integration tests between the Beats shipper client and the shipper server
The purpose of this test is to verify that the current version of the shipper client is able to successfully publish events to the shipper server. We could take the following steps:
- We run filebeat and the shipper server as binaries or containers, no code dependencies
- We pre-define configuration that would allow:
  1. filebeat to read a file that has N events
  2. filebeat to alter the consumed events from the file (simple processing, like adding a field)
  3. filebeat to publish events from the file to the shipper server
  4. the shipper server to output these events to a file
- We pre-define an input file and an expected output file according to the configured alteration rules. The alteration is needed to make the difference more obvious and confirm that events passed through the pipeline
- We run the pipeline until the shipper server has written N events to the file or time runs out (for example, by watching the logs in debug mode or using `wc -l` on the output file)
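The completion check in the last step could be sketched as a small shell helper that polls the output file with `wc -l`; the file path, expected count, and timeout below are placeholders, not agreed-upon names:

```shell
#!/usr/bin/env bash
# Sketch of the test's completion check: poll the shipper's output file
# until it contains the expected number of events or a timeout expires.
# All arguments (file path, event count, timeout) are hypothetical.
wait_for_events() {
  local file="$1" expected="$2" timeout="$3" elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    # wc -l counts one line per newline-terminated event in the output file
    if [ -f "$file" ] && [ "$(wc -l < "$file")" -ge "$expected" ]; then
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 1
}
```

The test harness would start filebeat and the shipper server, call something like `wait_for_events out.ndjson "$N" 60`, and then diff the output file against the expected one.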
Dependencies:
- [ ] Publishing the shipper as a docker image for consumption by the Beat output tests.
- [x] https://github.com/elastic/elastic-agent-shipper/issues/83
@cmacknz feel free to add your thoughts on this.
Looks good to me, having this test (and the more advanced tests) will be really valuable. Thanks!
Related, https://github.com/elastic/elastic-agent-shipper/issues/60 to write the first basic test that communicates with the shipper binary. Could also be done as part of this issue depending on progress and the order things happen in.
~~Actually, looks like this covers https://github.com/elastic/elastic-agent-shipper/issues/60 already. Closing that one as a duplicate.~~
~~Switching assignee to @faec like elastic/elastic-agent-shipper#60 was to start.~~
Re-reading this issue it is written to test filebeat (or any beat really) communicating with the shipper. We should do this but we can start with something simpler like elastic/elastic-agent-shipper#60 which just starts the shipper in isolation.
Let's write one simple test first as described in elastic/elastic-agent-shipper#60 that doesn't require using the beat shipper client for simplicity, and then follow up with this more complex set of tests.
Both tests are now requirements of https://github.com/elastic/elastic-agent-shipper/issues/34 which Denis owns.
We can have the shipper publish a Docker image as part of its CI process and just consume that in the Beats build, as we do for the other outputs we test against.
Here's a relatively simple Jenkinsfile demonstrating how to publish a Docker image as an example: https://github.com/elastic/stream/blob/439bc8b880175812ed5441be8237419fba29d877/.ci/Jenkinsfile#L112
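On the consumption side, the Beats test setup could then pull the published image through a docker-compose service like the other outputs it tests against. The registry path, tag, and port below are purely illustrative assumptions, not published values:

```yaml
# Hypothetical service entry for the Beats test docker-compose setup.
# Image name, tag, and the exposed gRPC port are placeholders.
version: "2.3"
services:
  elastic-agent-shipper:
    image: docker.elastic.co/observability/elastic-agent-shipper:latest
    ports:
      - "50051"
```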
@leehinman Is there any overlap with your current work there: https://github.com/elastic/elastic-agent-shipper/issues/60 cc @pierrehilbert @cmacknz
https://github.com/elastic/elastic-agent-shipper/issues/60 is a test of the shipper process in the shipper repository.
This issue is a test of the Beats shipper client in the Beats repository.
They will need to be two separate implementations.
It may make sense to wait for the completion of https://github.com/elastic/elastic-agent-shipper/issues/161 before implementing this, as the shipper configuration format is going to change significantly when that issue is completed.
If we can completely encapsulate the shipper configuration in a docker container, then we don't necessarily need to wait for https://github.com/elastic/elastic-agent-shipper/issues/161 as we can just update the container we publish as part of that work.
Hi! We just realized that we haven't looked into this issue in a while. We're sorry!
We're labeling this issue as `Stale` to make it hit our filters and make sure we get back to it as soon as possible. In the meantime, it'd be extremely helpful if you could take a look at it as well and confirm its relevance. A simple comment with a nice emoji will be enough :+1:.
Thank you for your contribution!
Closing this one as it is no longer necessary after the strategy change.