
Removed etcd member failed to stop on stuck disk

chaochn47 opened this issue 3 years ago • 9 comments

What happened?

etcd failed to stop and was stuck in the stopping state after it was removed from the cluster membership. It became unresponsive to any requests sent to it.

What did you expect to happen?

I expected etcd to terminate itself gracefully.

How can we reproduce it (as minimally and precisely as possible)?

This was observed during an availability zone outage. It can be reproduced roughly as follows:

  1. Provision a 3-member cluster.
  2. Inject delay into disk fsync operations (in our setup, disk I/O went through a special network link).
  3. Remove this member right after.

Here is a similar reproduction, https://github.com/etcd-io/etcd/issues/13527, but it does not include the member-removal fault injection.

Anything else we need to know?

{
    "level": "info",
    "ts": "2022-07-10T17:33:16.789Z",
    "caller": "raft/raft.go:859",
    "msg": "f858bb9b873f65a1 [term: 4] received a MsgApp message with higher term from fb4a15d865627b24 [term: 5]"
}

{
    "level": "info",
    "ts": "2022-07-10T17:33:16.789Z",
    "caller": "raft/raft.go:700",
    "msg": "f858bb9b873f65a1 became follower at term 5"
}

{
    "level": "info",
    "ts": "2022-07-10T17:33:16.789Z",
    "caller": "raft/node.go:327",
    "msg": "raft.node: f858bb9b873f65a1 changed leader from f858bb9b873f65a1 to fb4a15d865627b24 at term 5"
}

{
    "level": "warn",
    "ts": "2022-07-10T17:34:08.643Z",
    "caller": "etcdserver/server.go:1101",
    "msg": "server error",
    "error": "the member has been permanently removed from the cluster"
}

{
    "level": "warn",
    "ts": "2022-07-10T17:34:08.643Z",
    "caller": "etcdserver/server.go:1102",
    "msg": "data-dir used by this member must be removed"
}

etcd: /health error; QGET failed etcdserver: server stopped (status code 503)

{
    "level": "info",
    "ts": "2022-07-10T17:34:16.007Z",
    "caller": "traceutil/trace.go:145",
    "msg": "trace[1220767039] range",
    "detail": "{range_begin:/registry/health; range_end:; }",
    "duration": "2.000149186s",
    "start": "2022-07-10T17:34:14.006Z",
    "end": "2022-07-10T17:34:16.007Z",
    "steps": [
        "trace[1220767039] 'agreement among raft nodes before linearized reading'  (duration: 2.000062825s)"
    ]
}

Many more "apply request took too long" warnings with "error":"context canceled" followed, and they continued for almost 2 hours.

{
    "level": "warn",
    "ts": "2022-07-10T19:22:09.843Z",
    "caller": "wal/wal.go:808",
    "msg": "slow fdatasync",
    "took": "1h48m54.55408532s",
    "expected-duration": "1s"
}

  rafthttp pipeline termination

{
    "level": "info",
    "ts": "2022-07-10T19:22:09.843Z",
    "caller": "rafthttp/pipeline.go:86",
    "msg": "stopped HTTP pipelining with remote peer",
    "local-member-id": "f858bb9b873f65a1",
    "remote-peer-id": "772211084901eb47"
}
{
    "level": "info",
    "ts": "2022-07-10T19:22:09.843Z",
    "caller": "rafthttp/pipeline.go:86",
    "msg": "stopped HTTP pipelining with remote peer",
    "local-member-id": "f858bb9b873f65a1",
    "remote-peer-id": "b9f9f861d890475a"
}

...

etcd server stopped with exit code 0

Etcd version (please run commands below)

$ etcd --version
v3.4.18

$ etcdctl version
v3.4.18

Etcd configuration (command line flags or environment variables)

{
    "level": "info",
    "ts": "2022-06-17T10:05:05.264Z",
    "caller": "embed/etcd.go:303",
    "msg": "starting an etcd server",
    "etcd-version": "3.4.18",
    "git-sha": "c1c4e2d91",
    "go-version": "go1.15.15",
    "go-os": "linux",
    "go-arch": "amd64",
    "max-cpu-set": 2,
    "max-cpu-available": 2,
    "member-initialized": false,
    "name": "zombie-node-name",
    "data-dir": "/mnt/xvdb/datadir",
    "wal-dir": "/mnt/xvdb/datadir/member/wal",
    "wal-dir-dedicated": "/mnt/xvdb/datadir/member/wal",
    "member-dir": "/mnt/xvdb/datadir/member",
    "force-new-cluster": false,
    "heartbeat-interval": "100ms",
    "election-timeout": "1s",
    "initial-election-tick-advance": true,
    "snapshot-count": 10000,
    "snapshot-catchup-entries": 5000,
    "initial-advertise-peer-urls": [
        "http://zombie-node-ip:2380"
    ],
    "listen-peer-urls": [
        "http://0.0.0.0:2380"
    ],
    "advertise-client-urls": [
        "http://zombie-node-ip:2379"
    ],
    "listen-client-urls": [
        "http://0.0.0.0:2379"
    ],
    "listen-metrics-urls": [],
    "cors": [
        "*"
    ],
    "host-whitelist": [
        "*"
    ],
    "initial-cluster": "node-a=http://node-a-ip:2380,node-b=http://node-b-ip:2380,zombie-node-name=http://zombie-node-ip:2380",
    "initial-cluster-state": "existing",
    "initial-cluster-token": "cd5b97cb-7577-4b42-8804-673b57d20f93",
    "quota-size-bytes": 8388608000,
    "pre-vote": true,
    "initial-corrupt-check": false,
    "corrupt-check-time-interval": "0s",
    "auto-compaction-mode": "periodic",
    "auto-compaction-retention": "0s",
    "auto-compaction-interval": "0s",
    "discovery-url": "",
    "discovery-proxy": ""
}

Etcd debug information (please run commands below, feel free to obfuscate the IP address or FQDN in the output)

$ etcdctl member list -w table
# paste output here

$ etcdctl --endpoints=<member list> endpoint status -w table
# paste output here

Relevant log output

No response

chaochn47 avatar Aug 12 '22 00:08 chaochn47

From reading the code, it seems the etcd server was stuck in

https://github.com/etcd-io/etcd/blob/72d3e382e73c1a2a4781f884fb64792af3242f22/etcdserver/server.go#L1020-L1025

It was stuck there because the v2 health check handler returned etcdserver: server stopped here, which indicates close(s.stopping) had already been called.

It looks like fsync was stuck in the middle: 2022-07-10T19:22:09.843Z minus 1h48m54.55408532s is around 2022-07-10T17:33:15.289Z.

I expect that, despite disk failures/latency, close(s.done) would still be called and the process would exit.
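
To make that dependency concrete, here is a minimal, self-contained Go sketch (not the actual etcd code; the field and function names are made up) of how an apply/raft routine blocked on a slow fsync keeps the run loop from ever reaching close(s.done), so the stop path waits forever:

package main

import (
	"fmt"
	"time"
)

type server struct {
	stopping chan struct{} // closed once the member knows it must shut down
	done     chan struct{} // closed only after the run loop fully drains
}

// run models the main loop: it cannot exit until the in-flight apply,
// which is stuck behind a slow fsync, has finished.
func (s *server) run() {
	applyDone := make(chan struct{})
	go func() {
		slowFsync() // stand-in for a disk behind a degraded network link
		close(applyDone)
	}()

	<-s.stopping // member removed: we want to shut down
	<-applyDone  // ...but we block here until the fsync returns
	close(s.done)
}

// stop mirrors the hang: it signals shutdown, then waits on s.done,
// which never closes while the fsync is wedged.
func (s *server) stop() {
	close(s.stopping)
	select {
	case <-s.done:
		fmt.Println("stopped cleanly")
	case <-time.After(3 * time.Second):
		fmt.Println("still waiting on s.done (stuck apply/fsync)")
	}
}

func slowFsync() { time.Sleep(time.Hour) } // stand-in for a stuck fdatasync

func main() {
	s := &server{stopping: make(chan struct{}), done: make(chan struct{})}
	go s.run()
	s.stop()
}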

Any insights are much appreciated, thanks!!

chaochn47 avatar Aug 12 '22 01:08 chaochn47

Okay. I got a repro working by injecting a sleep at the raftAfterSave failpoint, followed by member removal.

The fifo scheduler's Stop is the culprit and is getting stuck: https://github.com/etcd-io/etcd/blob/72d3e382e73c1a2a4781f884fb64792af3242f22/pkg/schedule/schedule.go#L119-L125

Will dig a little deeper to understand why..
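
As a rough illustration (toy code with made-up names, not the real pkg/schedule implementation), this is how a FIFO scheduler's Stop can hang: if Stop waits for the worker to finish its current job, a job blocked on a stuck disk write blocks Stop forever.

package main

import (
	"fmt"
	"time"
)

// fifo is a toy scheduler: one worker drains jobs in order, and Stop
// waits for the worker to finish whatever job it is currently running.
type fifo struct {
	jobs     chan func()
	finished chan struct{}
}

func newFIFO() *fifo {
	f := &fifo{jobs: make(chan func(), 16), finished: make(chan struct{})}
	go func() {
		defer close(f.finished)
		for job := range f.jobs {
			job() // if this job blocks on a stuck fsync, nothing below runs
		}
	}()
	return f
}

func (f *fifo) Schedule(job func()) { f.jobs <- job }

// Stop closes the queue and waits for the worker; this is exactly where
// the shutdown hangs if the current job never returns.
func (f *fifo) Stop() {
	close(f.jobs)
	select {
	case <-f.finished:
		fmt.Println("scheduler stopped")
	case <-time.After(2 * time.Second):
		fmt.Println("Stop() is stuck: in-flight job is blocked")
	}
}

func main() {
	f := newFIFO()
	f.Schedule(func() { time.Sleep(time.Hour) }) // stand-in for a stuck apply
	f.Stop()
}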

chaochn47 avatar Aug 15 '22 06:08 chaochn47

The raft loop was stuck in the middle, and the apply routine is waiting for it.

https://github.com/etcd-io/etcd/blob/72d3e382e73c1a2a4781f884fb64792af3242f22/etcdserver/server.go#L1119-L1122

I think the proper fix should be to cancel this apply cleanly in the shutdown scenario, even if the disk write is stuck. Right now, the context is ignored.

https://github.com/etcd-io/etcd/blob/72d3e382e73c1a2a4781f884fb64792af3242f22/etcdserver/server.go#L1059-L1060
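
A sketch of the kind of change being suggested (hypothetical names, not a patch against etcd): instead of ignoring the context, the wait on the raft/disk write selects on ctx.Done() so a shutdown can unblock the routine even if the write never completes.

package main

import (
	"context"
	"fmt"
	"time"
)

// waitForRaft is a hypothetical stand-in for the apply-side wait on the
// raft/WAL write. Today the context is ignored; selecting on ctx.Done()
// lets a shutdown cancel the wait even when the fsync never returns.
func waitForRaft(ctx context.Context, raftDone <-chan struct{}) error {
	select {
	case <-raftDone: // normal path: disk write finished
		return nil
	case <-ctx.Done(): // shutdown path: give up on the stuck disk
		return ctx.Err()
	}
}

func main() {
	raftDone := make(chan struct{}) // never closed: models a stuck fsync

	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		time.Sleep(100 * time.Millisecond)
		cancel() // shutdown (e.g. member removed) cancels the wait
	}()

	fmt.Println("waitForRaft returned:", waitForRaft(ctx, raftDone))
}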

Any thoughts? @ahrtr @serathius @spzala

chaochn47 avatar Aug 15 '22 21:08 chaochn47

Just a quick question before I do a deep dive: can you reproduce this on release-3.5 or main?

ahrtr avatar Aug 15 '22 22:08 ahrtr

I think I can, but I haven't tried it yet. Will report back in a few minutes on release-3.5 or main with the reproduction script shared.

Here is one for v3.4.18: reproduce.txt

chaochn47 avatar Aug 15 '22 22:08 chaochn47

Here is one for v3.5.4: reproduce-3.5.txt

A side effect I observed:

Two leaders reported at the same time, even though there is actually only one leader!

$ etcdctl endpoint status --cluster -w table
+------------------------+------------------+---------+-----------------+---------+----------------+-----------+------------+-----------+------------+--------------------+--------+
|        ENDPOINT        |        ID        | VERSION | STORAGE VERSION | DB SIZE | DB SIZE IN USE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+------------------------+------------------+---------+-----------------+---------+----------------+-----------+------------+-----------+------------+--------------------+--------+
|  http://127.0.0.1:2379 | 8211f1d0f64f3269 |  3.5.4 |                 |   20 kB |          16 kB |     false |      false |         2 |          8 |                  8 |        |
| http://127.0.0.1:22379 | 91bc3c398fb3c146 |  3.5.4 |                 |   20 kB |          16 kB |      true |      false |         2 |          8 |                  8 |        |
| http://127.0.0.1:32379 | fd422379fda50e48 |  3.5.4 |                 |   20 kB |          16 kB |     false |      false |         2 |          8 |                  8 |        |
+------------------------+------------------+---------+-----------------+---------+----------------+-----------+------------+-----------+------------+--------------------+--------+
 
$ curl http://127.0.0.1:1234/etcdserver/raftAfterSave -XPUT -d'sleep(600000)'
 
$ etcdctl --endpoints http://127.0.0.1:22379,http://127.0.0.1:32379 member remove 8211f1d0f64f3269
Member 8211f1d0f64f3269 removed from cluster ef37ad9dc622a7c4

$ etcdctl endpoint status -w table --endpoints http://127.0.0.1:2379,http://127.0.0.1:22379,http://127.0.0.1:32379
+------------------------+------------------+---------+-----------------+---------+----------------+-----------+------------+-----------+------------+--------------------+--------+
|        ENDPOINT        |        ID        | VERSION | STORAGE VERSION | DB SIZE | DB SIZE IN USE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+------------------------+------------------+---------+-----------------+---------+----------------+-----------+------------+-----------+------------+--------------------+--------+
|  http://127.0.0.1:2379 | 8211f1d0f64f3269 |   3.5.4 |                 |   20 kB |          16 kB |      true |      false |         2 |          8 |                  8 |        |
| http://127.0.0.1:22379 | 91bc3c398fb3c146 |   3.5.4 |                 |   20 kB |          16 kB |     false |      false |         3 |         10 |                 10 |        |
| http://127.0.0.1:32379 | fd422379fda50e48 |   3.5.4 |                 |   20 kB |          16 kB |      true |      false |         3 |         10 |                 10 |        |
+------------------------+------------------+---------+-----------------+---------+----------------+-----------+------------+-----------+------------+--------------------+--------+

To elaborate, the symptom we hit is that a stale watch connection was not cleaned up on member removal (as it is supposed to be), so the client cache stayed permanently outdated...


FAILPOINTS=1 ./build

needs to be updated to 

FAILPOINTS=1 ./build.sh in release-3.5

chaochn47 avatar Aug 16 '22 00:08 chaochn47

It might not be safe to forcibly terminate the applying workflow.

The most feasible solution for now is to log repeatedly at server.go#L930 and server.go#L978, so as to provide more visibility into the issue.
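
In sketch form (made-up names, not an actual patch), that could look like a small helper that logs on a ticker while the shutdown wait is blocked, so a wedged fsync shows up in the logs instead of failing silently:

package main

import (
	"log"
	"time"
)

// waitWithProgressLog waits for done, logging a warning on every tick so
// a stuck shutdown step is visible in the logs. Sketch only.
func waitWithProgressLog(done <-chan struct{}, every time.Duration, what string) {
	ticker := time.NewTicker(every)
	defer ticker.Stop()
	start := time.Now()
	for {
		select {
		case <-done:
			return
		case <-ticker.C:
			log.Printf("still waiting for %s (elapsed %v)", what, time.Since(start))
		}
	}
}

func main() {
	done := make(chan struct{})
	go func() {
		time.Sleep(3500 * time.Millisecond) // stand-in for a slow apply/fsync
		close(done)
	}()
	waitWithProgressLog(done, time.Second, "raft loop to stop")
	log.Println("stopped")
}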

ahrtr avatar Aug 18 '22 05:08 ahrtr

Please note that etcdctl endpoint status isn't an atomic operation; it gets each member's status one by one. If there is a leader change in between, then you may get two leaders in the end, so it could be expected behavior. But you should only see this with very low probability, and it should be a temporary "issue".
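
For context, the command queries each endpoint one after another, roughly as in the sketch below (illustrative clientv3 usage, not the etcdctl source; the endpoints are placeholders). Each Status call is a separate snapshot, so a leader election between two calls can yield a table with two "leaders":

package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	endpoints := []string{ // placeholders for the cluster endpoints
		"http://127.0.0.1:2379",
		"http://127.0.0.1:22379",
		"http://127.0.0.1:32379",
	}
	cli, err := clientv3.New(clientv3.Config{Endpoints: endpoints, DialTimeout: 2 * time.Second})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// One Status RPC per endpoint, one after another: each row of the
	// table is a separate snapshot, so a leader change that happens
	// between two calls can show up as "two leaders".
	for _, ep := range endpoints {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		resp, err := cli.Status(ctx, ep)
		cancel()
		if err != nil {
			fmt.Printf("%s: %v\n", ep, err)
			continue
		}
		fmt.Printf("%s: leader=%x term=%d isLeader=%v\n",
			ep, resp.Leader, resp.RaftTerm, resp.Header.MemberId == resp.Leader)
	}
}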

ahrtr avatar Aug 19 '22 22:08 ahrtr

But you should only see this with very low probability, and it should be a temporary "issue"

Yeah, it only happens when disk I/O is stuck in the middle. Usually it is caused by a data center outage.

FYI, we are deploying a fix to our local monitoring agent to forcibly stop the server once it has been removed from the membership.
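
For reference, that watchdog boils down to something like the sketch below. Everything specific in it (the local member ID, the peer endpoints, and the systemctl command used to stop the wedged process) is a placeholder for our environment, not a definitive implementation.

package main

import (
	"context"
	"log"
	"os/exec"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// Placeholders: fill in from your own environment.
const localMemberID uint64 = 0xf858bb9b873f65a1

var peerEndpoints = []string{"http://node-a-ip:2379", "http://node-b-ip:2379"}

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: peerEndpoints, DialTimeout: 5 * time.Second})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	for range time.Tick(30 * time.Second) {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		resp, err := cli.MemberList(ctx)
		cancel()
		if err != nil {
			continue // can't tell right now; try again later
		}
		removed := true
		for _, m := range resp.Members {
			if m.ID == localMemberID {
				removed = false
				break
			}
		}
		if removed {
			// The member was removed but the local process is wedged on
			// disk I/O: stop it forcibly (placeholder command).
			log.Println("local member no longer in cluster; stopping etcd")
			_ = exec.Command("systemctl", "kill", "-s", "SIGKILL", "etcd").Run()
			return
		}
	}
}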

However, it could have been done in etcd IMHO.

chaochn47 avatar Aug 22 '22 19:08 chaochn47

Looks more like a feature request to force etcd to shut down after being removed from the cluster. My first thought is that this should be part of the admin operations that kill etcd on disk failure.

serathius avatar Nov 15 '22 14:11 serathius

@chaochn47 Has this issue been resolved? I also encountered abnormal fluctuations in etcd, similar to your situation.

HuangQAQ avatar Mar 27 '24 09:03 HuangQAQ