kubernetes-event-exporter
ES: only write ops with an op_type of create are allowed in data streams
Just deployed the exporter on our cluster, and I get this kind of error message for every event that is sent to ES. Googled around a bit, and it seems to be related to the index request sent to ES, which is no longer allowed in the latest ES/X-Pack version?
Error message:

```
2021-12-24T14:22:41Z ERR bitnami/blacksmith-sandox/kubernetes-event-exporter-0.11.0/src/github.com/opsgenie/kubernetes-event-exporter/pkg/sinks/elasticsearch.go:144 > Indexing failed: {"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"only write ops with an op_type of create are allowed in data streams"}],"type":"illegal_argument_exception","reason":"only write ops with an op_type of create are allowed in data streams"},"status":400}
```
ES receiver config:

```yaml
- name: "es"
  elasticsearch:
    hosts:
      - "http://x.x.x.x:9200"
    index: "aks-test-cl1-events"
    indexFormat: "aks-test-cl1-events-{2006-01-02}"
    useEventID: true
    deDot: true
```
ES:

```json
"version" : {
  "number" : "7.16.2",
  "build_flavor" : "default",
  "build_type" : "rpm",
  "build_hash" : "2b937c44140b6559905130a8650c64dbd0879cfb",
  "build_date" : "2021-12-18T19:42:46.604893745Z",
  "build_snapshot" : false,
  "lucene_version" : "8.10.1",
  "minimum_wire_compatibility_version" : "6.8.0",
  "minimum_index_compatibility_version" : "6.0.0-beta1"
}
```
X-Pack:

```json
"build" : {
  "hash" : "2b937c44140b6559905130a8650c64dbd0879cfb",
  "date" : "2021-12-18T19:42:46.604893745Z"
}
```
Solved this issue by creating an index template for the index pattern used by the event exporter, without the data stream setting. This overrides the default index template, which does enable data streams.
Before making changes, disable the exporter, e.g. by scaling its deployment to zero.
Steps in Kibana:
- Go to 'Stack Management' -> 'Data Streams'
- Search for the index. If it is not found, enable hidden data streams via the 'View' pulldown at the top right.
- Remove the data stream
- Go to 'Indices'
- Find all matching indices and remove them if needed
- Go to 'Index Templates'
- Create a new template:
  - Name the template
  - Fill in the index pattern used by the exporter
  - Set a priority to give this template a higher priority than the default index templates: a value >0 and <100 (from the ES docs)
  - Click next to fill in the other properties. I left everything empty except for the index settings: `{ "lifecycle": { "name": "timeseries_weekly" }, "number_of_shards": "3", "number_of_replicas": "1" }`
  - Validate the preview
  - Create the template
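The Kibana steps above amount to one index-template API call. Here is a rough sketch using only Python's standard library; the endpoint address, template name, index pattern, and ILM policy name are taken from this thread's example and will differ in your cluster (the request is only built here, not sent):

```python
import json
import urllib.request

# Placeholder endpoint; replace with your cluster address.
ES_URL = "http://x.x.x.x:9200"

# Index template matching the exporter's index pattern. The priority must
# beat the default templates; the ES docs suggest a value between 0 and 100
# for user templates. Note: no "data_stream" key, so matching indices are
# created as plain indices instead of data streams.
template = {
    "index_patterns": ["aks-test-cl1-events-*"],
    "priority": 50,
    "template": {
        "settings": {
            "index": {
                "lifecycle": {"name": "timeseries_weekly"},
                "number_of_shards": "3",
                "number_of_replicas": "1",
            }
        }
    },
}

def put_template(name):
    """Build (but do not send) the PUT _index_template request."""
    return urllib.request.Request(
        url=f"{ES_URL}/_index_template/{name}",
        data=json.dumps(template).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

req = put_template("aks-test-cl1-events")
print(req.get_method(), req.full_url)
```

Sending it with `urllib.request.urlopen(req)` against a live cluster should have the same effect as the Kibana wizard.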
Then enable the exporter again, e.g. scale back to 1. The index was created automatically once the exporter was up. No more errors in the logs, and the index now also contains data.
Although this workaround works, please update the exporter to pass the correct 'action' (op_type) so data streams also work.
Disabling useEventID also seems to solve this.
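On the useEventID point, my understanding is that the option makes the exporter write each event with an explicit document ID, and the plain index API with an explicit ID defaults to op_type index, which data streams reject; a write without an ID defaults to create. A toy sketch of the two request shapes (illustrative only, not the exporter's actual code):

```python
# Illustrative only: path shapes for single-document writes in Elasticsearch.
def index_request(target, doc_id=None):
    """Return the request line for writing one document to `target`."""
    if doc_id is None:
        # Auto-generated ID: op_type defaults to "create", accepted by data streams.
        return f"POST /{target}/_doc"
    # An explicit ID via the index API defaults to op_type "index", which data
    # streams reject; the _create endpoint forces op_type "create" instead.
    return f"PUT /{target}/_create/{doc_id}"

print(index_request("aks-test-cl1-events"))           # POST /aks-test-cl1-events/_doc
print(index_request("aks-test-cl1-events", "evt-1"))  # PUT /aks-test-cl1-events/_create/evt-1
```

So a proper fix in the exporter would presumably be to use the `_create` shape (or op_type create) whenever the target is a data stream.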
@diversit I am having the same issue. We use AWS OpenSearch (ES) version 6.8, and I don't see the options mentioned in https://github.com/opsgenie/kubernetes-event-exporter/issues/178#issuecomment-1029135988
```
{"level":"error","time":"2022-03-27T09:38:26Z","caller":"/app/pkg/sinks/elasticsearch.go:144","message":"Indexing failed: {\"error\":{\"root_cause\":[{\"type\":\"mapper_parsing_exception\",\"reason\":\"object mapping for [involvedObject.labels.app] tried to parse field [app] as object, but found a concrete value\"}],\"type\":\"mapper_parsing_exception\",\"reason\":\"object mapping for [involvedObject.labels.app] tried to parse field [app] as object, but found a concrete value\"},\"status\":400}"}
{"level":"error","time":"2022-03-27T09:38:14Z","caller":"/app/pkg/sinks/elasticsearch.go:144","message":"Indexing failed: {\"error\":{\"root_cause\":[{\"type\":\"mapper_parsing_exception\",\"reason\":\"object mapping for [involvedObject.labels.app] tried to parse field [app] as object, but found a concrete value\"}],\"type\":\"mapper_parsing_exception\",\"reason\":\"object mapping for [involvedObject.labels.app] tried to parse field [app] as object, but found a concrete value\"},\"status\":400}"}
```
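Note that this mapper_parsing_exception is a different problem from the data-stream one: it is a mapping conflict on involvedObject.labels.app. A likely cause (an assumption on my part, not taken from the exporter's code) is that Elasticsearch expands dotted field names into nested objects, so a plain `app` label and an `app.kubernetes.io/...` label end up claiming the same field with incompatible types:

```python
# Illustration of the assumed cause: if dotted label keys are expanded into
# nested JSON objects, two common Kubernetes labels produce incompatible
# mappings for the same "app" field.
doc_a = {"involvedObject": {"labels": {"app": "nginx"}}}  # "app" is a string
doc_b = {"involvedObject": {"labels": {"app": {"kubernetes": {"io/name": "nginx"}}}}}  # "app" is an object

# Elasticsearch maps a field once per index; whichever document arrives first
# fixes the type, and the other fails with mapper_parsing_exception.
type_a = type(doc_a["involvedObject"]["labels"]["app"]).__name__
type_b = type(doc_b["involvedObject"]["labels"]["app"]).__name__
print(type_a, type_b)  # str dict -> conflicting mappings for the same field
```

The deDot option is meant to avoid exactly this by rewriting the dots in label and annotation keys; if it is enabled and the error persists, an existing index may already hold the conflicting mapping.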