
Previous action was not able to update IndexMetaData

arnitolog opened this issue 4 years ago · 15 comments

Hello, I noticed that several indices have the status Failed with the error: "Previous action was not able to update IndexMetaData". I think it happens after data nodes restart, but I'm not sure. Is there any way to configure an automatic retry for such errors? My policy is below:

{
    "policy": {
        "policy_id": "ingest_policy",
        "description": "Default policy",
        "last_updated_time": 1574686046552,
        "schema_version": 1,
        "error_notification": null,
        "default_state": "ingest",
        "states": [
            {
                "name": "ingest",
                "actions": [],
                "transitions": [
                    {
                        "state_name": "search",
                        "conditions": {
                            "min_index_age": "4d"
                        }
                    }
                ]
            },
            {
                "name": "search",
                "actions": [
                    {
                        "timeout": "2h",
                        "retry": {
                            "count": 5,
                            "backoff": "constant",
                            "delay": "1h"
                        },
                        "force_merge": {
                            "max_num_segments": 1
                        }
                    }
                ],
                "transitions": [
                    {
                        "state_name": "delete",
                        "conditions": {
                            "min_index_age": "30d"
                        }
                    }
                ]
            },
            {
                "name": "delete",
                "actions": [
                    {
                        "timeout": "2h",
                        "retry": {
                            "count": 5,
                            "backoff": "constant",
                            "delay": "1h"
                        },
                        "delete": {}
                    }
                ],
                "transitions": []
            }
        ]
    }
}

arnitolog avatar Nov 26 '19 08:11 arnitolog

Hi @arnitolog,

At which action or step is the error occurring?

That error is from: https://github.com/opendistro-for-elasticsearch/index-management/blob/4eea94fe30627c461f84a86815c30a63e5ab8d20/src/main/kotlin/com/amazon/opendistroforelasticsearch/indexstatemanagement/ManagedIndexRunner.kt#L265

That error means one of the executions attempted to "START" the step being executed but was never able to finish it. This can happen if your data nodes restart in the middle of that execution period.

We currently don't have an automatic retry for this specific case, because we don't know whether the step finished or not. If the step is non-idempotent, we don't want to retry it, which is why we hand it over to the user to handle.

With that in mind, we could definitely add automatic retries for steps that are idempotent/safe, to eliminate the majority of cases where this can happen (like checking conditions for transitioning, etc.).
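
As an aside for anyone debugging this: the ISM explain API reports which state, action, and step a managed index is currently on, along with any error info, which is how you can answer the question above. A minimal sketch, with a placeholder index name:

GET _opendistro/_ism/explain/my-index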

dbbaughe avatar Nov 26 '19 18:11 dbbaughe

Hi @dbbaughe, this can happen on different steps. I saw this error on the "ingest" step (which is the first one) and on "search" (which is the second).

It would be good to have some retry mechanism for such cases; the less manual work the better.

arnitolog avatar Nov 27 '19 06:11 arnitolog

Some improvements that have been added to help with this:

https://github.com/opendistro-for-elasticsearch/index-management/pull/165
https://github.com/opendistro-for-elasticsearch/index-management/pull/209

We have a few further ideas that we will track in: https://github.com/opendistro-for-elasticsearch/index-management/issues/207

dbbaughe avatar May 08 '20 02:05 dbbaughe

This is still happening on the Open Distro 1.8.0 release. Strangely enough, a lot of them just stay on "Running"/"Attempting to transition" in ISM.

gittygoo avatar Jun 30 '20 15:06 gittygoo

Hey @gittygoo,

Are you using this plugin independently or using ODFE 1.8? What does your cluster setup look like? Are the "Attempting to transition"/"Running" statuses stuck even though the conditions are met? If so, what are those conditions? Can you check whether your cluster's pending tasks are backed up: GET /_cluster/pending_tasks

Thanks

dbbaughe avatar Jun 30 '20 15:06 dbbaughe

@dbbaughe it's an internal cluster with 2 nodes, using Opendistro 1.8.

The policy looks like this; it should rotate them daily until deletion... so yes, the conditions are met:

{
    "policy": {
        "policy_id": "default_ism_policy",
        "description": "Default policy",
        "last_updated_time": 1590706756863,
        "schema_version": 1,
        "error_notification": null,
        "default_state": "hot",
        "states": [
            {
                "name": "hot",
                "actions": [],
                "transitions": [
                    {
                        "state_name": "warm",
                        "conditions": {
                            "min_index_age": "1d"
                        }
                    }
                ]
            },
            {
                "name": "warm",
                "actions": [],
                "transitions": [
                    {
                        "state_name": "cold",
                        "conditions": {
                            "min_index_age": "2d"
                        }
                    }
                ]
            },
            {
                "name": "cold",
                "actions": [],
                "transitions": [
                    {
                        "state_name": "delete",
                        "conditions": {
                            "min_index_age": "3d"
                        }
                    }
                ]
            },
            {
                "name": "delete",
                "actions": [
                    {
                        "delete": {}
                    }
                ],
                "transitions": []
            }
        ]
    }
}

Tasks are empty

{"tasks":[]}

gittygoo avatar Jun 30 '20 17:06 gittygoo

Hi @gittygoo,

A few things to check:

  • Can you do a GET <index>/_settings on one that should have transitioned, just so we can confirm the "index.creation_date"?
  • Can you also confirm your cluster is not red, as executions are skipped in that case.
  • Do you see any logs for the index in elasticsearch.log that would imply the job is actually running and just not evaluating the conditions to true? Trying to see whether the issue is in ISM or in Job Scheduler.

Thanks
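
For reference, the first check and a debug-logging toggle could look like the sketch below; <index> is a placeholder, and the logger package is taken from the ManagedIndexRunner source path linked earlier:

GET <index>/_settings

PUT /_cluster/settings
{
    "transient": {
        "logger.com.amazon.opendistroforelasticsearch.indexstatemanagement": "debug"
    }
}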

dbbaughe avatar Jun 30 '20 18:06 dbbaughe

So here is an example:

  • Index (metricbeat-7.6.0-2020.06.23) with creation_date set to 1592870678351 (22/06/2020 19:04:38)
  • Cluster is green
  • Can't see any related logs in the Elasticsearch log referring to this index

Anything else I should check?

gittygoo avatar Jun 30 '20 18:06 gittygoo

@gittygoo, you can try setting the log level to debug and see if any logs pop up. Otherwise we can try to jumpstart the Job Scheduler and see if it starts working again. The Job Scheduler plugin will reschedule a job when either the job document is updated or the shard moves to a different node and needs to be rescheduled on the new node. So you can either manually move the .opendistro-ism-config index shards to a different node to force it, or manually update the managed_index documents in that index (probably something like changing enabled to false and back to true). Unfortunately we don't have an API to forcefully reschedule jobs... it can be something we take as an action item to add.
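
To illustrate the shard-move option, a hedged sketch using the standard cluster reroute API; the shard number and node names are placeholders for whatever your cluster actually has:

POST /_cluster/reroute
{
    "commands": [
        {
            "move": {
                "index": ".opendistro-ism-config",
                "shard": 0,
                "from_node": "node-1",
                "to_node": "node-2"
            }
        }
    ]
}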

dbbaughe avatar Jun 30 '20 18:06 dbbaughe

The way I connect the indices to the ISM policy is via index templates. So can I assume that removing all the current "Managed Indices" and then waiting 3 more days to see if the rotations went fine should achieve the same as your "jumpstart" idea, since the new indices would automatically be assigned that policy based on their names? If so, I will proceed to delete them and wait.
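
For context, attaching a policy through an index template in ODFE 1.x looked roughly like the sketch below; the template name and index pattern are assumptions, only the policy_id comes from the policy above:

PUT _template/metricbeat_ism
{
    "index_patterns": ["metricbeat-*"],
    "settings": {
        "opendistro.index_state_management.policy_id": "default_ism_policy"
    }
}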

gittygoo avatar Jun 30 '20 20:06 gittygoo

If you removed the current policy_ids from the indices, it would delete the internal jobs (Managed Indices). Then you could try re-adding them to those indices and see if it goes through. Not sure I followed the "waiting 3 more days" part.
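
Removing and re-adding a policy maps to the ISM remove/add APIs; a sketch using the index mentioned earlier and the policy_id from the policy above:

POST _opendistro/_ism/remove/metricbeat-7.6.0-2020.06.23

POST _opendistro/_ism/add/metricbeat-7.6.0-2020.06.23
{
    "policy_id": "default_ism_policy"
}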

dbbaughe avatar Jun 30 '20 22:06 dbbaughe

We are experiencing the same issues

OrangeTimes avatar Jul 01 '20 11:07 OrangeTimes

Hi @OrangeTimes,

The same issue as in "Previous action not able to update IndexMetaData", or similar to gittygoo's, where jobs don't appear to be running anymore?

Can you also give a bit more information about your cluster setup (ODFE vs. Amazon ES, version, number of nodes, etc.) and any more details about the issue you're experiencing?

dbbaughe avatar Jul 01 '20 16:07 dbbaughe

@dbbaughe similar to gittygoo. Some indices are in the Active state and some in the Failed state. Our Index Management page looks pretty much the same.

OrangeTimes avatar Jul 03 '20 12:07 OrangeTimes

Experiencing the same issue here, though possibly partly of our own doing. We switched to ODFE last night and blanket-applied a policy to our existing indices, then very quickly decided to apply a different policy instead. This morning I checked the indices and about 90% of them show "Previous action was not able to update IndexMetaData", with the last action being Force Merge. Tried retrying the failed step, but that didn't work; now I'm trying to remove the policy altogether and reapply it to try to jog the index.

Edit: This didn't work either, nor did retrying the policy from a specified state. Any more suggestions to debug or jog things are appreciated, as we're now stuck with quite a lot of indices in this failed state.

Here's a little more info on our setup: ODFE v1.8.0, 7 nodes (6 hot, 1 cold). Our policy transitions indices to the cold node first in a warm state after 2 days, then to a cold state after either a week or a month, depending on the policy. During the warm phase the indices are force-merged, their replicas are removed, they are made read-only, and they are reallocated, in that order.

Not sure if removing and attaching a different policy before the first one completed is what broke things, but whatever the cause, I've not yet been able to fix them. Happy to provide any additional information.
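
For anyone else stuck here, the retry attempts described above map to the ISM retry API, optionally with a state to retry from; the index name is a placeholder and "warm" is taken from the policy described above:

POST _opendistro/_ism/retry/my-index

POST _opendistro/_ism/retry/my-index
{
    "state": "warm"
}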

samling avatar Jul 07 '20 18:07 samling