GettingStartedWithELK
Kibana IPv4 Range Aggregation
If you have a log event with a source IP and a destination IP, how do you index them so that both fields are mapped as the "ip" field type and can be used with the range aggregation?
Come on, a few more clues might be nice 8)
Anyway, do you have a direction field that can be used to differentiate them? If not, then you need to index them in a different way.
Can we see an example event?
Cheers Jon
@gerdesj for instance, take this csv line:
63.88.73.59,104.197.28.247,104.197.28.0,-,,-,Google Inc.,Mountain View,CA,US,Google Inc.,Mountain View,CA,US
There's a source and a destination IP. I know grok has an %{IPV4} pattern; do I use that? Will using that pattern also set the field type to "ip"? What if I have multiple IP fields?
Also, what is a "direction field"? How would you index the multiple IP fields?
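In other words, would something like this mapping be the way to do it? A rough sketch, assuming placeholder names (my-logs, src_ip, dst_ip); the exact mapping syntax also depends on the Elasticsearch version:

```
PUT my-logs
{
  "mappings": {
    "properties": {
      "src_ip": { "type": "ip" },
      "dst_ip": { "type": "ip" }
    }
  }
}
```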
This really belongs on the forum. https://discuss.elastic.co/c/logstash
You need to differentiate the in and out addresses. I only mentioned a "direction" field in case your logs were directed, i.e. only in or only out, but here you have both source and destination in each log.
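To illustrate the difference, a directed log could get by with one address field plus a direction flag, whereas yours needs two separate address fields (field names here are just placeholders):

```
{ "ip": "63.88.73.59", "direction": "inbound" }

{ "src_ip": "63.88.73.59", "dst_ip": "104.197.28.247" }
```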
You could use grok, but you have a nice comma-separated log there with 14-odd fields, so have a look at the CSV filter. https://www.elastic.co/guide/en/logstash/current/plugins-filters-csv.html
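Something along these lines, roughly. I'm guessing at most of the column names from your sample line, so rename them to whatever the fields actually mean:

```
filter {
  csv {
    separator => ","
    # Column names are guesses; only the first three clearly look like
    # the source address, destination address and a network in the sample.
    columns => ["src_ip", "dst_ip", "dst_network", "col4", "col5", "col6",
                "org", "city", "region", "country",
                "org2", "city2", "region2", "country2"]
  }
}
```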
@gerdesj I saw something from this repo, so I wanted to retain some of the context. At first I thought you meant the "direction" the filters are applied in, but I see what you mean now.
Small examples I can think of are firewall logs, where both source and destination are logged in a single event, and netflow logs, where source and destination are likewise logged together.
I tried the csv filter, but the mutate filter wasn't cleaning up the leading and trailing whitespace. So I moved to grok, which also shows me the exact grok parse failure, which is nice.
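For anyone else reading, a rough sketch of that grok approach; the field names are mine, and mutate's strip option is another way to deal with the whitespace:

```
filter {
  grok {
    # Capture the first three comma-separated values; on a mismatch
    # grok tags the event with _grokparsefailure.
    match => { "message" => "^%{IPV4:src_ip},%{IPV4:dst_ip},%{IPV4:dst_network},%{GREEDYDATA:rest}" }
  }
  mutate {
    # strip removes leading and trailing whitespace from the named fields
    strip => ["src_ip", "dst_ip", "dst_network"]
  }
}
```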