Add system call event field for fd.rip_rport and fd.sip_sport
Motivation
The MISP server distributes some IP/port indicators using a category/type called ip-dst|port and ip-src|port - this is a combined IP:port field (example: 8.8.8.8:53).
If we had Falco fields named fd.rip_rport and fd.sip_sport, I would be able to bring those indicators into a Falco list as IP:port pairs (e.g. items: ["8.8.8.8:53"]), which would be much cleaner than having to write a separate rule for each pair - e.g. fd.rip=IP and fd.rport=port.
Feature
Please can we add Falco fields fd.rip_rport and fd.sip_sport, each a combined IP:port pair?
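For illustration, here is a rough sketch of how such a field could be consumed in a rules file. Note that fd.rip_rport is the field being requested in this issue and does not exist yet; the list name, rule, and indicator values are made up:

```yaml
# Hypothetical sketch: fd.rip_rport is the requested field and does not
# exist in Falco at the time of writing; indicator values are examples only.
- list: misp_ip_port_indicators
  items: ["8.8.8.8:53", "4.4.4.4:53"]

- rule: Outbound Connection To MISP Indicator (sketch)
  desc: Detect outbound connections matching MISP ip-dst|port indicators
  condition: evt.type = connect and evt.dir = < and fd.rip_rport in (misp_ip_port_indicators)
  output: Connection to a known indicator (connection=%fd.name)
  priority: WARNING
```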
Additional context reference: https://github.com/an1245/falco-misp-connector
Big plus one -- funny, I was thinking about this over the past days while working on some other stuff. Telepathy? So definitely supportive!
Question, would you want to give it a try or would you appreciate one of us taking it on?
Hi - I would appreciate someone else looking at it, as I am a little stretched for time at the moment. But it would be good if someone could tag me on the PRs so I can learn how to propose/make those changes next time.
/assign
/milestone 0.18.0
Actually @leogr, since we plan to add a new operator called join with a custom delimiter, we could accomplish this that way.
If we agree on that, we can track this issue as a sub-issue of https://github.com/falcosecurity/libs/issues/1925.
[What I am not clear on yet is how the casting of the IP to string will work, or whether the operator applies after the field has been cast to a string, as in the case of fd.lip.]
I think it's a slightly different use case - or maybe it's the same one.
Example:
join([fd.rip, fd.rport], ":") in ("8.8.8.8:53", "4.4.4.4:53")
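For readers following along, the same check in a full rule might look roughly like this. The join operator is only proposed in falcosecurity/libs#1925, so this is a sketch and the final syntax may differ:

```yaml
# Sketch only: the join operator is proposed, not implemented, and the
# exact syntax shown here may not match the final design.
- rule: Outbound Connection To MISP Indicator (join sketch)
  desc: Match IP:port pairs by joining two fields at rule-evaluation time
  condition: >
    evt.type = connect and evt.dir = < and
    join([fd.rip, fd.rport], ":") in ("8.8.8.8:53", "4.4.4.4:53")
  output: Connection to a known indicator (connection=%fd.name)
  priority: WARNING
```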
I missed answering this. Yes, it would be an extension allowing .join(fieldAsString1, fieldAsString2, fieldAsStringN, sep=":") rather than limiting .join to lists only. Apologies, I should have made this clearer in my initial post above.
In the world of Spark there is for example https://spark.apache.org/docs/3.1.3/api/python/reference/api/pyspark.sql.functions.concat_ws.html
Btw I personally would like concat better since join is typically associated with SQL table joins ...
> Btw I personally would like `concat` better since `join` is typically associated with SQL table joins ...

This naming applies to DBs only.
In the context of arrays in programming languages, it is common (at least for me) to use `join` (joins an array's elements into a string) and `concat` (merges one or more arrays).
ACK, yeah as always it depends.
New Proposal:
Introduce 2 new operators:
- `join` for the initially planned use case of creating one string from an array
- `concat` for the use case of creating a string from 2 or more existing filtercheck fields: `concat(delimiter, val(field1), val(field2), ...)` or similar

[nit] Given the above, maybe always have the `delimiter` as the first arg, so that the interface for passing the delimiter is more consistent?
Posting this idea here, but we should update the other issue and possibly create a new one for the concat use cases, assuming we wanted to go down that route.
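Under the delimiter-first variant of this proposal, a condition might be sketched as follows (concat and val are proposed names only, not implemented):

```yaml
# Sketch only: concat/val as proposed above, with the delimiter first.
condition: concat(":", val(fd.rip), val(fd.rport)) in ("8.8.8.8:53", "4.4.4.4:53")
```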
I like this proposal!
Suggestions:
- I'd prefer to have the `delimiter` as the last arg, but I have no strong opinion atm
- If possible, I'd avoid `val()` in such a case, so that it may be simply `concat(field1, field2, ..., delimiter)`
I'll offer some feedback here - if you are going to allow only a fixed number of fields (2) to be concatenated, then it's probably OK for the delimiter to go last.
If you plan to allow an unlimited number of values to be concatenated, it might be best to put it first, because the expression could become hard to read and interpret if it were last.
+1, that's why many frameworks have it first. And yes, we should allow any number of fields, so it's the most flexible interface.
@leogr ok, yes, let's do concat(delimiter, field1, field2, ..., fieldN) -- do you want to cut a new dedicated issue for that, plus update the join() issue? E.g. the delimiter should be the first arg for any of the new operators (if applicable).
Ok. I will update join and create a separate issue for concat.
Meanwhile, I was thinking about an alternative way of implementing concat.
If we want to allow more general string manipulation use cases, concat without the delimiter arg might work better. e.g. concat(fd.rip, ":", fd.rport) would work great IMO. Even if the user has to repeat the delimiter multiple times, I don't see this as an issue since I don't expect users to pass a very long list of args.
Just an idea. wdyt? :thinking:
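For comparison, the delimiter-less variant suggested above might look like this in a condition (again, proposed syntax only):

```yaml
# Sketch only: concat without a dedicated delimiter arg; the separator is
# just another string argument and is repeated inline where needed.
condition: concat(fd.rip, ":", fd.rport) in ("8.8.8.8:53", "4.4.4.4:53")
```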
Nice, I would like to have both.
Question: can we support all use cases with an overloaded concat operator? It could be fair to limit delimiters to a set of valid/common delimiters. Or should we create 2 separate concat_* operators?
@an1245 more thoughts?
I like your latest suggestion @leogr - that is very flexible and I can see that easily supporting a number of use-cases. Either or both are good with me!
Thanks @an1245 for posting 🙃
Great @leogr, yes, everyone likes your suggestion. If we can also support the other version with a delimiter arg, that would be nice, but we can all likely just live with the solution you proposed. Thanks.
/milestone 0.20.0
/milestone 0.21.0
Still no time during this release cycle, sorry. /milestone 0.22.0
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/falcosecurity/community.
/lifecycle stale
/remove-lifecycle stale