alerting
[BUG] [alerting_exception] analyzer [analyzer_keyword] has not been configured in mappings
What is the bug?
When defining a new monitor (under Alerting) and selecting the type 'Per document monitor', saving the monitor fails with the following error:
Furthermore, when "testing" the query, it times out:
The query looks like this:
As shown below, the same query works in Discover:
How can one reproduce the bug? Steps to reproduce the behavior:
- Go to Alerting>Monitors>Create monitor
- Select 'Per document monitor', select any index and choose a query
- Go to 'Preview query and performance' and wait
- Try to save the monitor
What is the expected behavior? I don't know; it has never worked for me.
What is your host/environment?
- OS: CentOS 7
- OpenSearch version: 2.7.0
- OpenSearch Dashboards version: 2.7.0
NOTE: We are ingesting the logs using Graylog.
Transferred this issue to the Alerting plugin repository, as Alerting owns development and maintenance of document-level monitors.
@paasi6666 Could you please share your index mapping for us to reproduce the issue?
Sure. Note that the mapping is generated by Graylog.
{
  "rpz_0": {
    "mappings": {
      "dynamic_templates": [
        {
          "internal_fields": {
            "match": "gl2_*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "keyword"
            }
          }
        },
        {
          "store_generic": {
            "match_mapping_type": "string",
            "mapping": {
              "type": "keyword"
            }
          }
        }
      ],
      "properties": {
        "@metadata_beat": {
          "type": "keyword"
        },
        "@metadata_type": {
          "type": "keyword"
        },
        "@metadata_version": {
          "type": "keyword"
        },
        "@timestamp": {
          "type": "date"
        },
        "agent_ephemeral_id": {
          "type": "keyword"
        },
        "agent_name": {
          "type": "keyword"
        },
        "beats_type": {
          "type": "keyword"
        },
        "client_id": {
          "type": "keyword"
        },
        "event_action": {
          "type": "keyword"
        },
        "full_message": {
          "type": "text",
          "analyzer": "standard"
        },
        "gl2_accounted_message_size": {
          "type": "long"
        },
        "gl2_message_id": {
          "type": "keyword"
        },
        "gl2_processing_error": {
          "type": "keyword"
        },
        "gl2_processing_timestamp": {
          "type": "date",
          "format": "uuuu-MM-dd HH:mm:ss.SSS"
        },
        "gl2_receive_timestamp": {
          "type": "date",
          "format": "uuuu-MM-dd HH:mm:ss.SSS"
        },
        "gl2_remote_ip": {
          "type": "keyword"
        },
        "gl2_remote_port": {
          "type": "long"
        },
        "gl2_source_input": {
          "type": "keyword"
        },
        "gl2_source_node": {
          "type": "keyword"
        },
        "host_name": {
          "type": "keyword"
        },
        "hostname": {
          "type": "keyword"
        },
        "log_file_path": {
          "type": "keyword"
        },
        "log_offset": {
          "type": "long"
        },
        "loglevel": {
          "type": "keyword"
        },
        "message": {
          "type": "text",
          "analyzer": "standard"
        },
        "query_action": {
          "type": "keyword"
        },
        "query_class": {
          "type": "keyword"
        },
        "query_name": {
          "type": "keyword"
        },
        "query_type": {
          "type": "keyword"
        },
        "rpz_category": {
          "type": "keyword"
        },
        "rpz_message": {
          "type": "keyword"
        },
        "rpz_zone": {
          "type": "keyword"
        },
        "source": {
          "type": "text",
          "analyzer": "analyzer_keyword",
          "fielddata": true
        },
        "source_ip": {
          "type": "keyword"
        },
        "source_port": {
          "type": "keyword"
        },
        "streams": {
          "type": "keyword"
        },
        "timestamp": {
          "type": "date",
          "format": "uuuu-MM-dd HH:mm:ss.SSS"
        },
        "url_domain": {
          "type": "keyword"
        },
        "url_short": {
          "type": "keyword"
        }
      }
    }
  }
}
@lezzago After taking a look at the index mapping, I see the issue:
"source": {
  "type": "text",
  "analyzer": "analyzer_keyword",
  "fielddata": true
},
How do I update this to work as intended?
@lezzago The settings of the index are as follows:
GET rpz_0/_settings
{
  "rpz_0": {
    "settings": {
      "index": {
        "number_of_shards": "4",
        "provided_name": "rpz_0",
        "creation_date": "1649938793819",
        "analysis": {
          "analyzer": {
            "analyzer_keyword": {
              "filter": "lowercase",
              "tokenizer": "keyword"
            }
          }
        },
        "number_of_replicas": "0",
        "uuid": "e8NRlQCHQfau984C3QGMPQ",
        "version": {
          "created": "7100299",
          "upgraded": "136287827"
        }
      }
    }
  }
}
This issue has been reported in the OpenSearch forum https://forum.opensearch.org/t/alerting-exception-analyzer-analyzer-keyword-has-not-been-configured-in-mappings/14777.
I've tested this issue with an example from Elastic https://www.elastic.co/guide/en/elasticsearch/reference/current/analyzer.html
The result was exactly the same as reported in the forum case.
opensearch-node1_2.6.0 | org.opensearch.alerting.util.AlertingException: analyzer [my_analyzer] has not been configured in mappings
opensearch-node1_2.6.0 | at org.opensearch.alerting.util.AlertingException$Companion.wrap(AlertingException.kt:70) ~[opensearch-alerting-2.6.0.0.jar:2.6.0.0]
opensearch-node1_2.6.0 | at org.opensearch.alerting.util.DocLevelMonitorQueries.updateQueryIndexMappings(DocLevelMonitorQueries.kt:369) ~[opensearch-alerting-2.6.0.0.jar:2.6.0.0]
opensearch-node1_2.6.0 | at org.opensearch.alerting.util.DocLevelMonitorQueries.access$updateQueryIndexMappings(DocLevelMonitorQueries.kt:45) ~[opensearch-alerting-2.6.0.0.jar:2.6.0.0]
opensearch-node1_2.6.0 | at org.opensearch.alerting.util.DocLevelMonitorQueries$updateQueryIndexMappings$1.invokeSuspend(DocLevelMonitorQueries.kt) ~[opensearch-alerting-2.6.0.0.jar:2.6.0.0]
opensearch-node1_2.6.0 | at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33) [kotlin-stdlib-1.6.10.jar:1.6.10-release-923(1.6.10)]
opensearch-node1_2.6.0 | at kotlinx.coroutines.DispatchedTask.run(Dispatched.kt:285) [kotlinx-coroutines-core-1.1.1.jar:?]
opensearch-node1_2.6.0 | at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:594) [kotlinx-coroutines-core-1.1.1.jar:?]
opensearch-node1_2.6.0 | at kotlinx.coroutines.scheduling.CoroutineScheduler.access$runSafely(CoroutineScheduler.kt:60) [kotlinx-coroutines-core-1.1.1.jar:?]
opensearch-node1_2.6.0 | at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:742) [kotlinx-coroutines-core-1.1.1.jar:?]
opensearch-node1_2.6.0 | Caused by: java.lang.Exception: java.lang.IllegalArgumentException: analyzer [my_analyzer] has not been configured in mappings
opensearch-node1_2.6.0 | ... 9 more
opensearch-node1_2.6.0 | [2023-06-21T15:04:14,379][ERROR][o.o.a.u.AlertingException] [opensearch-node1] Alerting error: AlertingException[analyzer [my_analyzer] has not been configured in mappings]; nested: Exception[java.lang.IllegalArgumentException: analyzer [my_analyzer] has not been configured in mappings];
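For anyone wanting to reproduce this without Graylog, an index definition along the lines of the linked Elastic example can be sketched as follows (the index and field names here are illustrative; the analyzer is named `my_analyzer` to match the stack trace above):

```json
PUT /analyzer-repro
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "source": {
        "type": "text",
        "analyzer": "my_analyzer"
      }
    }
  }
}
```

Creating a 'Per document monitor' on an index mapped like this should trigger the same `analyzer [my_analyzer] has not been configured in mappings` error.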
I've tested that analyzer with examples from the link and it works with no issues, but for some reason the Alerting plugin can't see the definition of that analyzer when creating a monitor. Looking at the logs, it appears the Alerting plugin triggers the error as soon as the index is selected in the monitor, before the create button is even hit. I assume validation runs before the monitor is created.
This is likely because the Alerting plugin doesn't copy the analyzer definition from the source index settings to the queryIndex settings.
Is this a missing feature or a bug? Maybe an OpenSearch developer can take a look at this?
Analyzer updates are static config changes to an index: they would require closing the index > applying the analyzer setting change > re-opening the index.
Closing the Alerting query index is not possible, as all monitors share the query index and monitors run in parallel. We cannot support this in the current architecture.
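For a normal index, the static-settings flow described above would look roughly like this (sketched against the `rpz_0` index from this thread; a closed index serves no traffic while the change is applied):

```json
POST /rpz_0/_close

PUT /rpz_0/_settings
{
  "analysis": {
    "analyzer": {
      "analyzer_keyword": {
        "tokenizer": "keyword",
        "filter": ["lowercase"]
      }
    }
  }
}

POST /rpz_0/_open
```

Because the Alerting query index is shared by all document-level monitors running in parallel, it can never be closed this way, which is why the analyzer definitions cannot simply be replayed into it.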
@eirsep So any index with a custom analyzer or normalizer simply cannot have a monitor configured on it? And this is acceptable to the OpenSearch team?! Since this is tied to security detectors as well, this should be considered a major bug, not to be closed lightly!
https://github.com/opensearch-project/security-analytics/issues/697