Elastic Agent: support output types similar to other *beats
Describe the enhancement: Elastic Agent should support all the same output types as other *beats.
Describe a specific use case for the enhancement or feature: Is it safe to assume that, given Elastic Agent is built on top of *beats technology, it will support the use of different output types such as Kafka?
- https://www.elastic.co/guide/en/beats/filebeat/master/kafka-output.html
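For illustration, the Kafka output described in the linked Filebeat docs looks roughly like this in filebeat.yml (hosts and topic are placeholders); the ask is for Elastic Agent to accept an equivalent output:

```yaml
output.kafka:
  # initial brokers used to fetch cluster metadata
  hosts: ["kafka1:9092", "kafka2:9092"]
  # target topic for published events
  topic: "filebeat-logs"
  partition.round_robin:
    reachable_only: false
  required_acks: 1      # wait for the leader's acknowledgement
  compression: gzip
```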
For reference, I asked about this on Discuss: https://discuss.elastic.co/t/supported-output-types-for-elastic-agent/258901/1.
Pinging @elastic/agent (Team:Agent)
@slmingol Yes, our intention is to add support for more output types, but we'll probably add them one at a time because Elastic Agent now uses data streams and a new indexing strategy. We want to ensure the full end-to-end experience is good from the agent, to the output system (Logstash/Kafka), and then to Elasticsearch data streams. Next up on our list is Logstash, and it should be coming soon.
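For context, the indexing strategy referenced above names data streams as `<type>-<dataset>-<namespace>`, so any intermediate output has to route events to the right stream, for example:

```
logs-nginx.access-default       # nginx access logs, "default" namespace
metrics-system.cpu-production   # system CPU metrics, "production" namespace
```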
When is Fleet-managed Elastic Agent output to Logstash planned to be released? It would be really great to have that functionality as soon as possible, so we can start to roll out agents in masses :). There is probably no environment where all endpoints are permitted to connect directly to a central Elasticsearch cluster, so 'regional' hubs (like Logstash) together with Fleet management are essential for rolling out Elastic Agent in larger enterprise environments with many network zones. We would like this to work with Fleet-managed agents: agent --> logstash --> kafka <-- logstash --> elastic. The part we don't have working is the first hop: managed agent --> logstash.
@mostlyjason do you have any updates on when Logstash/Kafka support might be added? For our use case we'd really prefer to use Elastic Agent instead of Filebeat, but the Kafka integration is a must-have.
I just wanted to elaborate on the importance of this.
Data in Elasticsearch is considered processed data, meaning the original event/log has already been transformed for Elasticsearch to consume. As part of that we might lose some data, or processing might fail entirely because a message did not arrive in the expected format, a field mapping error occurred, and so on. There are too many ways for this to fail, and we can lose data.
With Kafka it is much easier: one consumer sends to Elasticsearch and another sends the unmodified events to long-term storage (a sketch follows below).
From a security/compliance perspective we need to ensure that no event is lost and that the original message is preserved for forensics.
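One way to picture that split is two Logstash pipelines consuming the same topic in separate consumer groups. A minimal pipelines.yml sketch (pipeline ids and paths are hypothetical, and each referenced .conf would set its own `group_id` on the kafka input):

```yaml
# Two independent readers of the same Kafka topic: a failure in the
# Elasticsearch pipeline never blocks the unmodified archival copy.
- pipeline.id: kafka-to-elasticsearch    # parsed events for search
  path.config: "/etc/logstash/conf.d/kafka_es.conf"
- pipeline.id: kafka-to-archive          # raw events for long-term storage
  path.config: "/etc/logstash/conf.d/kafka_archive.conf"
```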
I can see that a limited Logstash output has been added (standalone mode only), but again, that is an unnecessary complication for forwarding to Kafka: either I need a Logstash instance per Kafka topic, or I need to write something that scans documents and decides which topic to forward each one to. If that decision fails, again, we can lose data.
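For reference, that standalone-only Logstash output looks roughly like this in elastic-agent.yml (host and CA path are placeholders):

```yaml
outputs:
  default:
    type: logstash
    hosts: ["your-logstash-host:5044"]
    # TLS is typically required between the agent and Logstash
    ssl.certificate_authorities: ["/path/to/ca.pem"]
```

Topic selection would then have to happen inside Logstash, which is exactly the per-topic complication described above.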
Agree with all of the above ^. We would love to switch from Beats to Elastic Agent but can't because there is no support for the Kafka output. Kafka is vital for us to ensure that we don't lose any logs in the event of a failure within the system. It would be awesome to get the Kafka output added!
Thanks
Is there any update on this? Are there any plans for this in the near future?
We are actively working on enabling all output types for all inputs in the Elastic Agent. This is all related to the architecture changes being made to use https://github.com/elastic/elastic-agent-shipper with the Elastic Agent.
Awesome thanks for the update @blakerouse !
So, 8.9 has been released and there is still no Kafka output for Fleet-managed agents. Will this ever materialize? Is it really on the roadmap?
Good morning @tallyoh, the Kafka output is indeed on the roadmap and is being worked on for our next version. All related details can be found here: https://github.com/elastic/kibana/issues/143324. Hence, closing this issue as done.