
Error with Painless scripted field 'doc['flow_id'].value'.

Open myrsecurity opened this issue 4 years ago • 69 comments

Hi, I've tried to import the dashboards following the method described, and I'm getting this error:

Request to Elasticsearch failed: {"error":{"root_cause":[{"type":"script_exception","reason":"runtime error","script_stack":["org.elasticsearch.search.lookup.LeafDocLookup.get(LeafDocLookup.java:94)","org.elasticsearch.search.lookup.LeafDocLookup.get(LeafDocLookup.java:41)","doc['flow_id'].value"," ^---- HERE"],"script":"doc['flow_id'].value","lang":"painless"}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"logstash-2020.04.29-000001","node":"RmOnDn2mSsWSKkNKg2bgsA","reason":{"type":"script_exception","reason":"runtime error","script_stack":["org.elasticsearch.search.lookup.LeafDocLookup.get(LeafDocLookup.java:94)","org.elasticsearch.search.lookup.LeafDocLookup.get(LeafDocLookup.java:41)","doc['flow_id'].value"," ^---- HERE"],"script":"doc['flow_id'].value","lang":"painless","caused_by":{"type":"illegal_argument_exception","reason":"No field found for [flow_id] in mapping with types []"}}}]},"status":400}

I'm reading from a remote pfSense via Filebeat. The logs hit Elasticsearch after all of the filtering, etc.


Thank you

myrsecurity avatar Apr 29 '20 20:04 myrsecurity

How exactly do you import the dashboards?

pevma avatar Apr 30 '20 09:04 pevma

I'm receiving the same script exception. Dashboards, etc. were imported via the curl commands provided on the README page. The issue is preventing events in the EventsList from being displayed. I'm using the logstash filter linked from the README page. The following is further information from the SN-ALL dashboard. Please advise.

script_exception at shard 0, index logstash-flow-2020.11.22, node VURsDiwmTnyNCTmjTmpqmQ
Type: script_exception
Reason: runtime error
Script stack:
org.elasticsearch.index.fielddata.ScriptDocValues$Longs.get(ScriptDocValues.java:121)
org.elasticsearch.index.fielddata.ScriptDocValues$Longs.getValue(ScriptDocValues.java:115)
'ip == ' + doc['src_ip.keyword'].value + ' && port == ' + doc['src_port'].value + ' && ip == ' + doc['dest_ip.keyword'].value + ' && port == ' + doc['dest_port'].value + ' && protocols == ' + doc['proto.keyword'].value.toLowerCase()
^---- HERE

Script: 'ip == ' + doc['src_ip.keyword'].value + ' && port == ' + doc['src_port'].value + ' && ip == ' + doc['dest_ip.keyword'].value + ' && port == ' + doc['dest_port'].value + ' && protocols == ' + doc['proto.keyword'].value.toLowerCase()

Lang: painless
Position: offset 73, start 0, end 232
Caused by type: illegal_state_exception
Caused by reason: A document doesn't have a value for a field! Use doc[].size()==0 to check if a document is missing a field!
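The illegal_state_exception above points at the fix directly: guard every doc[...] access with a size() check before reading .value. A sketch of what the guarded Kibana scripted field could look like, with the field names copied from the failing script (this is an illustration of the suggested guard, not the exact patch that was later shipped):

```painless
// Return an empty string when any source field is absent instead of
// throwing: doc[...].size() == 0 means the document lacks that field.
if (doc['src_ip.keyword'].size() == 0 || doc['src_port'].size() == 0 ||
    doc['dest_ip.keyword'].size() == 0 || doc['dest_port'].size() == 0 ||
    doc['proto.keyword'].size() == 0) {
    return '';
}
return 'ip == ' + doc['src_ip.keyword'].value + ' && port == ' + doc['src_port'].value +
       ' && ip == ' + doc['dest_ip.keyword'].value + ' && port == ' + doc['dest_port'].value +
       ' && protocols == ' + doc['proto.keyword'].value.toLowerCase();
```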

alphaDev23 avatar Nov 22 '20 07:11 alphaDev23

Was able to reproduce. Will try to cook a patch today. I think it is related to a possible fix here: https://github.com/StamusNetworks/SELKS/issues/255#issuecomment-698536769

I would like to confirm: on which dashboards/visualizations does this appear?

pevma avatar Nov 22 '20 09:11 pevma

I only have Elasticsearch indexes for: alert, fileinfo, flow, http, tls. The issue is only appearing on SN-ALERTS from the data I have.

As a note, I attempted to use Filebeat to send Suricata logs directly to Elasticsearch using the provided elasticsearch7-template.json template. I verified the template was loaded in Elasticsearch. However, I believe my filebeat.yml was incorrectly configured because, by modifying 'output.elasticsearch.index', I was only able to get a logstash-<DATE> index, and nothing was displayed in the dashboards. I'm not a Filebeat expert. If you have a filebeat.yml that works with the template, it would eliminate the logstash service from the solution.
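For reference, a minimal filebeat.yml sketch for loading a custom template and writing per-event-type indexes might look like the following. The paths, template name, and index pattern are assumptions for illustration, not a verified SELKS configuration:

```yaml
# Hypothetical filebeat.yml sketch: ship eve.json straight to
# Elasticsearch and load the provided JSON template. All names and
# paths below are assumptions and will need adjusting.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/suricata/eve.json
    json.keys_under_root: true

setup.template.enabled: true
setup.template.name: "logstash"
setup.template.pattern: "logstash-*"
setup.template.json.enabled: true
setup.template.json.path: "/etc/filebeat/elasticsearch7-template.json"
setup.template.json.name: "logstash"
setup.ilm.enabled: false          # custom index names require ILM off

output.elasticsearch:
  hosts: ["localhost:9200"]
  # Split indexes per Suricata event_type, matching the logstash-<type>-* layout
  index: "logstash-%{[event_type]}-%{+yyyy.MM.dd}"
```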

alphaDev23 avatar Nov 22 '20 18:11 alphaDev23

Were the indexes created, or did they already exist, in Kibana Management?

pevma avatar Nov 24 '20 13:11 pevma

The indexes were created through the logstash template provided on the README page. It is a slight modification, given that 'type' doesn't exist in 7.x. The indexes did not exist prior to instantiating the stack.
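For context, the 'type' change refers to the removal of mapping types in Elasticsearch 7.x; the shape of that template modification looks roughly like this (the property shown is illustrative):

```
6.x:  "mappings": { "_doc": { "properties": { "flow_id": { "type": "long" } } } }
7.x:  "mappings": { "properties": { "flow_id": { "type": "long" } } }
```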

alphaDev23 avatar Nov 24 '20 17:11 alphaDev23

OK, just to confirm: does the issue appear only on SN-ALL or also on SN-ALERTS? From the error, it comes from the logstash-flow... index, which I think is not used in SN-ALERTS.

pevma avatar Nov 24 '20 21:11 pevma

I made a mistake in my last comment. It is only appearing on SN-ALL. I do not have any data in SN-ALERTS so I'm not able to confirm whether it occurs in SN-ALERTS.

alphaDev23 avatar Nov 24 '20 22:11 alphaDev23

Any update on the above?

alphaDev23 avatar Nov 30 '20 04:11 alphaDev23

This patch fixes the issue, as mentioned here: https://github.com/StamusNetworks/KTS7/issues/1#issuecomment-731723442 You can either apply it manually to each scripted field of each index (for example logstash-alert* / logstash-http* in Kibana Management), or it will be taken care of in the next dashboards release, planned this week.
Apologies for the delay!

pevma avatar Nov 30 '20 07:11 pevma

No worries. Thank you for fixing. Fantastic work on these dashboards, btw!

alphaDev23 avatar Nov 30 '20 22:11 alphaDev23

*Running SELKS 6 + ELK 7.10.0 with X-Pack enabled, so all communications are over HTTPS.

I am having the same issue. So, is the solution just to enable "community_id" in the Suricata config and restart Suricata, or do I need to perform more steps?

Should I use doc['community_id.keyword'].value or doc['community_id'].value?

Thank you

ManuelFFF avatar Dec 01 '20 16:12 ManuelFFF

This issue doesn't seem related? As for enabling the community id: yes, it just needs to be enabled and Suricata restarted.

pevma avatar Dec 01 '20 17:12 pevma

Hi @pevma,

Like I said, I am experiencing the same issue. When I open Discover in Kibana, there's always a pop-up warning stating there is an issue with 2/15 shards. Please see the screenshots below:

[screenshots: Shard error 1, 2, 3]

This issue started as soon as I enabled X-Pack and all communications switched to HTTPS. We have talked about this matter and the side effects it brings to the SELKS suite in other posts. I was hoping that a new SELKS release or patch would fix this and the other issues that appear only when the user enables X-Pack's basic security features in ELK. Then I saw this post and thought there might be an easy way to address it, since other users have seen the same error.

I tried enabling community_id in the Suricata config, then restarted Suricata and EveBox. The issue did not disappear; it just mutated into a different error, as you can see here: [screenshot: No community_id field]

It makes no difference whether I include the .keyword or not. Maybe I am missing additional important steps. I hope you can help me make this error go away.

Thank you

ManuelFFF avatar Dec 01 '20 18:12 ManuelFFF

Any advice?

ManuelFFF avatar Dec 03 '20 13:12 ManuelFFF

I think you should use it without the .keyword. Before that, you should make sure you see it properly in the JSON logs (eve.json): there should be a community flow id key/record in the logs.
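If the scripted field reads community_id, the same missing-field guard applies there too. A hedged sketch of such a Kibana Painless scripted field, assuming community_id is referenced without the .keyword suffix as suggested above (illustration only, not the shipped fix):

```painless
// Guard against documents that lack community_id (for example, events
// written before the option was enabled in suricata.yaml).
if (doc.containsKey('community_id') == false || doc['community_id'].size() == 0) {
    return '';
}
return doc['community_id'].value;
```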

pevma avatar Dec 03 '20 22:12 pevma

Hi,

I only tried the .keyword because of this comment https://github.com/StamusNetworks/SELKS/issues/255#issuecomment-698792496, but even that did not resolve the issue.

Checking the eve.json logs, I can see the flow_id field and also the community_id field:

{"timestamp":"2020-12-04T08:50:26.651146-0500","flow_id":1308048361440886,"in_iface":"enp2s0","event_type":"flow","src_ip":"192.168.1.128","src_port":58589,"dest_ip":"239.255.255.250","dest_port":3702,"proto":"UDP","app_proto":"failed","flow":{"pkts_toserver":7,"pkts_toclient":0,"bytes_toserver":4886,"bytes_toclient":0,"start":"2020-12-04T08:47:26.378486-0500","end":"2020-12-04T08:47:33.171907-0500","age":7,"state":"new","reason":"unknown","alerted":false},"community_id":"1:JJD9J+CckkTq2iKzZP6j8zVZjNY="}
{"timestamp":"2020-12-04T08:50:26.651523-0500","flow_id":1308048361440886,"in_iface":"enp2s0","event_type":"flow","src_ip":"192.168.1.128","src_port":58589,"dest_ip":"239.255.255.250","dest_port":3702,"proto":"UDP","app_proto":"failed","flow":{"pkts_toserver":7,"pkts_toclient":0,"bytes_toserver":4886,"bytes_toclient":0,"start":"2020-12-04T08:47:26.378486-0500","end":"2020-12-04T08:47:33.171907-0500","age":7,"state":"new","reason":"unknown","alerted":false},"community_id":"1:JJD9J+CckkTq2iKzZP6j8zVZjNY="}
{"timestamp":"2020-12-04T08:50:27.318169-0500","flow_id":2012176036617619,"in_iface":"enp2s0","event_type":"flow","src_ip":"192.168.1.179","src_port":50754,"dest_ip":"224.0.0.252","dest_port":5355,"proto":"UDP","app_proto":"failed","flow":{"pkts_toserver":2,"pkts_toclient":0,"bytes_toserver":150,"bytes_toclient":0,"start":"2020-12-04T08:47:15.613779-0500","end":"2020-12-04T08:47:16.020953-0500","age":1,"state":"new","reason":"unknown","alerted":false},"community_id":"1:eR0XiX1AMxyOvQcJd8kGHF+YIzY="}
{"timestamp":"2020-12-04T08:50:27.318319-0500","flow_id":2012176036617619,"in_iface":"enp2s0","event_type":"flow","src_ip":"192.168.1.179","src_port":50754,"dest_ip":"224.0.0.252","dest_port":5355,"proto":"UDP","app_proto":"failed","flow":{"pkts_toserver":2,"pkts_toclient":0,"bytes_toserver":150,"bytes_toclient":0,"start":"2020-12-04T08:47:15.613779-0500","end":"2020-12-04T08:47:16.020953-0500","age":1,"state":"new","reason":"unknown","alerted":false},"community_id":"1:eR0XiX1AMxyOvQcJd8kGHF+YIzY="}

The logs above are from a fresh, up-to-date SELKS 6 install, including ELK 7.10.0. I have not enabled the community_id field in suricata.yaml, but the field is enabled in the SELKS custom config file that overrides Suricata's base config (/etc/suricata/selks6-addin.yaml). So the eve.json logs include both fields, flow_id and community_id, and yet I'm getting the shard errors related to flow_id.
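Since both errors complain about documents lacking values, one quick sanity check is to confirm that the eve.json records actually carry every field the scripted fields read. A small Python sketch of that check; the sample record is abridged from the logs quoted above, and the helper name is made up:

```python
import json

# One of the eve.json flow records quoted above, trimmed to the fields
# the failing scripted fields read.
sample = ('{"timestamp":"2020-12-04T08:50:26.651146-0500",'
          '"flow_id":1308048361440886,"event_type":"flow",'
          '"src_ip":"192.168.1.128","src_port":58589,'
          '"dest_ip":"239.255.255.250","dest_port":3702,"proto":"UDP",'
          '"community_id":"1:JJD9J+CckkTq2iKzZP6j8zVZjNY="}')

REQUIRED = ("flow_id", "community_id", "src_ip", "src_port",
            "dest_ip", "dest_port", "proto")

def missing_fields(line, required=REQUIRED):
    """Return the required keys absent from one eve.json record."""
    event = json.loads(line)
    return [field for field in required if field not in event]

print(missing_fields(sample))  # an empty list means every field is present
```

Running this over a few thousand lines of eve.json would show whether some event types (e.g. stats) lack the fields and thus trigger the shard errors on the catch-all logstash-* pattern.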

What would you recommend me to check/try next?

Thank you

ManuelFFF avatar Dec 04 '20 14:12 ManuelFFF

Where exactly are you making the change/addition to the scripted fields? Is it in the logstash-flow* index in Kibana Management? And on which discovery/visualization exactly do you get the error?

pevma avatar Dec 05 '20 09:12 pevma

Hi,

The error appears when I check Discover/logstash-*. It is NOT present if I check Discover/logstash-flow-*. I tried the modifications on Index Patterns/logstash-*; Index Patterns/logstash-flow-* does not have a scripted field.

ManuelFFF avatar Dec 07 '20 14:12 ManuelFFF

OK, so you mean when you do discovery with the index logstash-*? What about trying, for example, logstash-dns-* or logstash-http-*?

pevma avatar Dec 07 '20 14:12 pevma

I verified all the logs in Discover/logstash-protocol-* one by one. Only Discover/logstash-* is affected.

ManuelFFF avatar Dec 07 '20 19:12 ManuelFFF

Any thoughts?

ManuelFFF avatar Dec 09 '20 14:12 ManuelFFF

What do you use the index logstash-service-* for? Out of curiosity, if it's OK to ask.

Apart from that, I think it is just a complaint message: do the logs show up or not?

-- Regards, Peter Manev


pevma avatar Dec 09 '20 15:12 pevma

Hi,

I'm sorry if I wasn't clear enough in my previous message. The index logstash-service-* does not really exist; I used that pattern name to refer to all of the following indexes:

logstash-*
logstash-alert-*
logstash-anomaly-*
logstash-dhcp-*
logstash-dnp3-*
logstash-dns-*
logstash-fileinfo-*
logstash-flow-*
logstash-http-*
logstash-ikev2-*
logstash-krb5-*
logstash-nfs-*
logstash-rdp-*
logstash-rfb-*
logstash-sip-*
logstash-smb-*
logstash-smtp-*
logstash-snmp-*
logstash-ssh-*
logstash-tftp-*
logstash-tls-*

Perhaps I should have used logstash-[event_type]-* instead, or just used the exact index names, as I did this time. What I wanted to say is that I checked all of the above indexes, one by one, and the error appears only when I check Discover/logstash-*.

ManuelFFF avatar Dec 09 '20 17:12 ManuelFFF

I think using logstash-event_type-* is better in terms of zooming in on a specific index/event_type. You can also look at any of the event types in their own dashboards, including the raw events themselves at the bottom of every dashboard. So you just need to select the dashboard (from Kibana -> Dashboards); for example, SN-SMB will show you a dashboard with some visualizations and the raw logs of the SMB event type (or SMB protocol events).

pevma avatar Dec 10 '20 21:12 pevma

So, there is no way to fix this error? [screenshot]

ManuelFFF avatar Dec 11 '20 13:12 ManuelFFF

You should be able to import the raw API exports from here - https://github.com/StamusNetworks/KTS7#how-to-use - to fix the issue.

pevma avatar Dec 14 '20 21:12 pevma

Was this issue resolved in the master branch? I just pulled and I'm receiving the following:

script_exception at shard 0, index logstash-flow-2020.12.23, node n6KVwvteRyaKlBCWbQPACw
Type: script_exception
Reason: runtime error
Script stack:
org.elasticsearch.index.fielddata.ScriptDocValues$Longs.get(ScriptDocValues.java:121)
org.elasticsearch.index.fielddata.ScriptDocValues$Longs.getValue(ScriptDocValues.java:115)
'ip == ' + doc['src_ip.keyword'].value + ' && port == ' + doc['src_port'].value + ' && ip == ' + doc['dest_ip.keyword'].value + ' && port == ' + doc['dest_port'].value + ' && protocols == ' + doc['proto.keyword'].value.toLowerCase()
^---- HERE

Script: 'ip == ' + doc['src_ip.keyword'].value + ' && port == ' + doc['src_port'].value + ' && ip == ' + doc['dest_ip.keyword'].value + ' && port == ' + doc['dest_port'].value + ' && protocols == ' + doc['proto.keyword'].value.toLowerCase()

Lang: painless
Position: offset 73, start 0, end 232
Caused by type: illegal_state_exception
Caused by reason: A document doesn't have a value for a field! Use doc[].size()==0 to check if a document is missing a field!

alphaDev23 avatar Dec 23 '20 06:12 alphaDev23

Yes, it is. Besides pulling the master branch, you need to reload the dashboards.

The other alternative is simply to use the selks-upgrade_stamus routine - that will auto-update the dashboards package, after which you can reset/reload them from the GUI.

https://github.com/StamusNetworks/SELKS/wiki/How-to-load-or-update-dashboards#from-scirius


pevma avatar Dec 23 '20 07:12 pevma

I've recreated the entire ELK stack. Same issue. Please advise.

alphaDev23 avatar Dec 23 '20 19:12 alphaDev23