
3.4.18 etcd data inconsistency

Open chaochn47 opened this issue 3 years ago • 23 comments

What happened?

A 3 etcd cluster returned inconsistent data.

'bad' Node

{
   "level": "info",
   "ts": "2022-06-09T09:21:10.368Z",
   "caller": "mvcc/kvstore_compaction.go:56",
   "msg": "finished scheduled compaction",
   "compact-revision": 165517829,
   "took": "13.589883ms"
}
{
    "level": "fatal",
    "ts": "2022-06-09T09:22:13.472Z",
    "caller": "mvcc/kvstore_txn.go:150",
    "msg": "range failed to find revision pair",
    "revision-main": 165519108,
    "revision-sub": 0,
    "stacktrace": "
          go.etcd.io/etcd/mvcc.(*storeTxnRead).rangeKeys
              /home/ec2-user/workplace/etcd/src/etcd/mvcc/kvstore_txn.go:150
          go.etcd.io/etcd/mvcc.(*storeTxnRead).Range
              /home/ec2-user/workplace/etcd/src/etcd/mvcc/kvstore_txn.go:51
          go.etcd.io/etcd/mvcc.(*metricsTxnWrite).Range
              /home/ec2-user/workplace/etcd/src/etcd/mvcc/metrics_txn.go:37
          go.etcd.io/etcd/etcdserver.(*applierV3backend).Range
              /home/ec2-user/workplace/etcd/src/etcd/etcdserver/apply.go:286
          go.etcd.io/etcd/etcdserver.(*EtcdServer).Range.func3
              /home/ec2-user/workplace/etcd/src/etcd/etcdserver/v3_server.go:121
          go.etcd.io/etcd/etcdserver.(*EtcdServer).doSerialize
              /home/ec2-user/workplace/etcd/src/etcd/etcdserver/v3_server.go:600
          go.etcd.io/etcd/etcdserver.(*EtcdServer).Range
              /home/ec2-user/workplace/etcd/src/etcd/etcdserver/v3_server.go:122
          go.etcd.io/etcd/etcdserver/api/v3rpc.(*kvServer).Range
              /home/ec2-user/workplace/etcd/src/etcd/etcdserver/api/v3rpc/key.go:52
          go.etcd.io/etcd/etcdserver/etcdserverpb._KV_Range_Handler.func1
              /home/ec2-user/workplace/etcd/src/etcd/etcdserver/etcdserverpb/rpc.pb.go:3554
          github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1
              /home/ec2-user/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:31
          github.com/grpc-ecosystem/go-grpc-prometheus.(*ServerMetrics).UnaryServerInterceptor.func1
              /home/ec2-user/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/server_metrics.go:107
          github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1
              /home/ec2-user/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:34
          go.etcd.io/etcd/etcdserver/api/v3rpc.newUnaryInterceptor.func1
              /home/ec2-user/workplace/etcd/src/etcd/etcdserver/api/v3rpc/interceptor.go:70
          github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1
              /home/ec2-user/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:34
          go.etcd.io/etcd/etcdserver/api/v3rpc.newLogUnaryInterceptor.func1
              /home/ec2-user/workplace/etcd/src/etcd/etcdserver/api/v3rpc/interceptor.go:77
          github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1
              /home/ec2-user/go/pkg/mod/github.com/grpc-ecosystem/[email protected]/chain.go:39
          go.etcd.io/etcd/etcdserver/etcdserverpb._KV_Range_Handler
              /home/ec2-user/workplace/etcd/src/etcd/etcdserver/etcdserverpb/rpc.pb.go:3556
          google.golang.org/grpc.(*Server).processUnaryRPC
              /home/ec2-user/go/pkg/mod/google.golang.org/[email protected]/server.go:1024
          google.golang.org/grpc.(*Server).handleStream
              /home/ec2-user/go/pkg/mod/google.golang.org/[email protected]/server.go:1313
          google.golang.org/grpc.(*Server).serveStreams.func1.1
              /home/ec2-user/go/pkg/mod/google.golang.org/[email protected]/server.go:722"
}

kube-controller-manager continuously failed to update its lock due to partial data in etcd.

W0630 19:46:09.303166      10 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted

E0630 19:45:38.273530      10 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"converting (v1.ConfigMap) to (core.Endpoints): Subsets not present in src"}

What did you expect to happen?

The error range failed to find revision pair should never happen, because a read transaction reads either from the write buffer or from boltdb.

We are reporting this case for reference.

How can we reproduce it (as minimally and precisely as possible)?

Unfortunately, I don't have any clue how to reproduce it.

Anything else we need to know?

  1. mvcc revision diverged between nodes.
Screen Shot 2022-07-05 at 12 28 23 PM
  2. db size diverged.
Screen Shot 2022-07-05 at 12 28 30 PM
  3. compact failed to apply because of the revision divergence.
  4. the consistency checker was not enabled in this old cluster, which would have helped on-call operators retain the WAL logs in time (see the flag sketch after this list).
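
For reference, a sketch of how such a check could be enabled on newer clusters (flag names as of etcd 3.4+; the member name and data dir below are placeholders):

# Hypothetical invocation; adjust --name/--data-dir for the real deployment.
$ etcd --name infra1 --data-dir /var/lib/etcd \
    --experimental-initial-corrupt-check=true \
    --experimental-corrupt-check-time=1h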

Etcd version (please run commands below)

$ etcd --version
v3.4.18


Etcd configuration (command line flags or environment variables)

paste your configuration here

Etcd debug information (please run commands below, feel free to obfuscate the IP address or FQDN in the output)

$ etcdctl member list -w table
# paste output here

$ etcdctl --endpoints=<member list> endpoint status -w table
# paste output here

Relevant log output

No response

chaochn47 avatar Jul 11 '22 23:07 chaochn47

Please provide the info below:

etcdctl  endpoint status -w json --cluster
# Run command below on each member
./etcd-dump-db  iterate-bucket  ~/tmp/etcd/infra1.etcd/  meta  --decode

ahrtr avatar Jul 13 '22 02:07 ahrtr

Thanks for the comment!

Unfortunately the point-in-time db state on each member was lost.

The corruption check on the new clusters will raise an alarm and print the consistent index (CI) on disk if this issue happens again.

chaochn47 avatar Jul 13 '22 17:07 chaochn47

if this issue happens again.

Is it easy to reproduce this issue in your environment (dev or production)?

The corruption check on the new clusters

Did you just create a new cluster or replace one problematic member with a new one?

ahrtr avatar Jul 14 '22 01:07 ahrtr

Is it easy to reproduce this issue in your environment (dev or production)?

It happened in a production cluster created on demand.

I am not even sure where to start to reproduce the error range failed to find revision pair. I suspect this corruption may have happened in boltdb, the filesystem layer, or the underlying disk.

Did you just create a new cluster or replace one problematic member with a new one?

The on-call operator replaced the problematic member with a new one and everything recovered.

chaochn47 avatar Jul 14 '22 17:07 chaochn47

The on-call operator replaced the problematic member with a new one and everything recovered.

Thanks for the feedback. If just one member has an issue, in other words the majority is still working well, then most likely the db file of the problematic member was corrupted. Usually you will see a panic when you try to execute a boltDB command in this case, something like 13406. Has the on-call operator in your company tried any boltDB commands and confirmed this?
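
For example, something along these lines (the db path is a placeholder):

$ bbolt check /var/lib/etcd/member/snap/db
$ bbolt stats /var/lib/etcd/member/snap/db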

ahrtr avatar Jul 14 '22 21:07 ahrtr

Yeah, I downloaded the snapshots taken from the impacted node before and after the panic.

I ran some bbolt commands like bbolt check and some other subcommands (stats) on both db files. The results returned OK, with no panic like 13406.

I'd like to keep this issue open until a second occurrence with the WAL logs present.

chaochn47 avatar Jul 21 '22 00:07 chaochn47

@chaochn47 do you still have data in your metric system? Can you post other metrics? It seems like something happened on 06/09. etcd_server_is_leader - wondering if there was a leader change on 06/09, or if there were too many leader changes indicating a network/load issue.

etcd_server_heartbeat_send_failures_total - to make sure there were no heartbeat failures. etcd_server_snapshot_apply_in_progress_total - just in case there was some anomaly.

rate(etcd_debugging_mvcc_compact_revision[1m]) - to see if all nodes are compacting at the same rate. etcd_disk_defrag_inflight - to see if there were any correlated defrags around 06/09.

It's interesting why revision divergence happened on 06/09 but size divergence on 06/19.

Was the cluster under heavy load or a different load pattern?

lavacat avatar Jul 31 '22 22:07 lavacat

@lavacat thanks for taking a look!

We do have data in the metrics and logging system.

etcd_server_is_leader - wondering if there was a leader change on 06/09, or if there were too many leader changes indicating a network/load issue.

There was a leadership change around 5 minutes before this event. However, we have started to see 3 more similar issues happening on other production clusters. They do not correlate with leader changes.

etcd_server_heartbeat_send_failures_total - to make sure there were no heartbeat failures. etcd_server_snapshot_apply_in_progress_total - just in case there was some anomaly.

No heartbeat failures and no snapshot apply in progress.

rate(etcd_debugging_mvcc_compact_revision[1m]) - to see if all nodes are compacting at the same rate. etcd_disk_defrag_inflight - to see if there were any correlated defrags around 06/09.

Compaction is driven by kube-apiserver every 5 minutes. The compact_revision is consistent until the event. Defrag is performed by another on-host agent. It did not perform a defrag because db_size - db_size_in_use did not breach the 200 MiB threshold.
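
For context, that cadence corresponds to kube-apiserver's --etcd-compaction-interval flag, which defaults to 5m (shown below only for reference; other required kube-apiserver flags are omitted):

$ kube-apiserver --etcd-servers=https://<etcd-endpoint>:2379 --etcd-compaction-interval=5m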

Was the cluster under heavy load or a different load pattern?

I think the cluster was under light load. The mvcc range, txn, and put rates were low, and the db size was under 25 MiB.

chaochn47 avatar Aug 01 '22 17:08 chaochn47

I suspected that the write txn failed to write back its buffer to the read txn buffer, while a new concurrent read txn created at the same time copied a stale read txn buffer. But from the code perspective, this does not seem to be an issue.

The only possibility is that the disk write failed and the mmap'd data was somehow not reflected. But a disk write failure should have been surfaced to etcd at the same time.

But we are enabling the corrupt check, alarming faster, retaining more WAL files, building better tooling to investigate, etc. So hopefully next time there will be more information.
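
As a rough sketch of the retention part (flag values are illustrative; 0 means keep all files, and other flags are omitted):

$ etcd --max-wals=0 --max-snapshots=0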

chaochn47 avatar Aug 01 '22 17:08 chaochn47

Here is a similar case we observed on September 12, 2023.

What happened?

A 3 etcd cluster returned inconsistent data.

The last 3 relevant logs before the first revision error occurred on the bad node. There is no panic log such as range failed to find revision pair.

{
  "level": "info",
  "ts": "2023-09-12T13:30:55.970Z",
  "caller": "mvcc/index.go:189",
  "msg": "compact tree index",
  "revision": 540040293
}
{
 "level": "info",
 "ts": "2023-09-12T13:30:56.278Z",
 "msg": "finished scheduled compaction",
 "compact-revision": 540040293,
 "took": "300.432453ms"
}
{
 "level": "info",
 "ts": "2023-09-12T13:33:06.694Z",
 "msg": "purged",
 "path": "/mnt/<subpath>/member/snap/0000000000000025-00000000216876b7.snap"
}

kube-controller-manager continuously fails with mvcc revision errors.

W0912 13:53:03.296333   11 watcher.go:220] watch chan error: etcdserver: mvcc: required revision has been compacted
E0912 13:49:48.143489   11 status.go:71] apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xb, desc:"etcdserver: mvcc: required revision is a future revision"}: etcdserver: mvcc: required revision is a future revision

Anything else we need to know?

  1. MVCC revision diverged between nodes. DB size in use diverged.
image

Etcd version (please run commands below)

$ etcd --version
3.4.20

miancheng7 avatar Sep 19 '23 17:09 miancheng7

It seems the names "revision divergency" and "db size total divergency" are mixed up; m1-m2 actually seems to be the db size total divergency.

Please

  • execute etcdctl endpoint status -w json --cluster, and provide the output.
  • Is it possible to provide the db files?

ahrtr avatar Sep 19 '23 18:09 ahrtr

Thanks, updated.

FYI,

  • execute etcdctl endpoint status -w json --cluster, and provide the output.
{
  "Endpoint": "http://xxx:2379/",
  "Status": {
    "header": {
      "cluster_id": 11216169436736817000,
      "member_id": 3380614885785619000,
      "revision": 540470663,
      "raft_term": 37
    },
    "version": "3.4.20",
    "dbSize": 401346560,
    "leader": 3587062029378754000,
    "raftIndex": 560998036,
    "raftTerm": 37,
    "raftAppliedIndex": 560998036,
    "dbSizeInUse": 207654912
  }
}
{
  "Endpoint": "http://xxx:2379/",
  "Status": {
    "header": {
      "cluster_id": 11216169436736817000,
      "member_id": 3587062029378754000,
      "revision": 540470663,
      "raft_term": 37
    },
    "version": "3.4.20",
    "dbSize": 401272832,
    "leader": 3587062029378754000,
    "raftIndex": 560998036,
    "raftTerm": 37,
    "raftAppliedIndex": 560998036,
    "dbSizeInUse": 207732736
  }
}
{
  "Endpoint": "http://xxx:2379/",
  "Status": {
    "header": {
      "cluster_id": 11216169436736817000,
      "member_id": 15037885980601823000,
      "revision": 540057523,
      "raft_term": 37
    },
    "version": "3.4.20",
    "dbSize": 401489920,
    "leader": 3587062029378754000,
    "raftIndex": 560998036,
    "raftTerm": 37,
    "raftAppliedIndex": 560998036,
    "dbSizeInUse": 192278528
  }
}
  • Is it possible to provide the db files?

We’re not allowed to share customer data.

miancheng7 avatar Sep 19 '23 19:09 miancheng7

  • Please compare the consistent_index of the three members using etcd-dump-db
$ ./etcd-dump-db iterate-bucket path-2-db meta --decode
key="scheduledCompactRev", value={2804621 0}
key="finishedCompactRev", value={2804621 0}
key="consistent_index", value=3050985
  • Could you share the complete logs of the three members?
  • To double check, did the revision start to diverge from Sept. 11 13:30?

ahrtr avatar Sep 19 '23 19:09 ahrtr

Here is a case we observed on Nov 1, 2023.

What happened?

A 3 etcd cluster returned inconsistent data.

etcd version: 3.4.18

Running etcdctl get returns 3 netmaster pods on the 2 normal nodes, but 5 netmaster pods on the 1 bad node (we had deleted 2 netmaster pods). image

A panic occurred during operation; it seems the data became inconsistent after etcd restarted, and no errors were detected at startup. (I don't think it's related to #266.)

image

Anything else we need to know?

  1. The RaftIndex and AppliedIndex of the 3 nodes are consistent.
  2. Running bbolt check returns OK.
  3. Running etcd-dump-db, there are indeed 5 netmaster pods in bbolt. The revision gap relative to the normal node seems relatively large, but I am not completely sure that the pods have not been deleted and recreated (see the command sketch below).
  4. experimental-initial-corrupt-check is disabled.

Bad node image

Normal node image
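
A sketch of the comparison mentioned in item 3, run against each member's db file (the path is a placeholder):

$ ./etcd-dump-db iterate-bucket /var/lib/etcd/member/snap/db key --decode
# compare the decoded <rev, KeyValue> entries across members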

Tachone avatar Nov 07 '23 08:11 Tachone

Do you have any suggestions on the direction of investigation? I haven't been able to reproduce it yet. @ahrtr @serathius

Tachone avatar Nov 29 '23 12:11 Tachone

I am working on reproducing the bbolt data inconsistency. I will update here if I find out something. @Tachone

fuweid avatar Nov 30 '23 03:11 fuweid

@Tachone Would you mind sharing what experimental-backend-bbolt-freelist-type value you are using? thanks

fuweid avatar Nov 30 '23 03:11 fuweid

@Tachone Would you mind sharing what experimental-backend-bbolt-freelist-type value you are using? thanks

--experimental-backend-bbolt-freelist-type 'array', default value in 3.4.18 @fuweid

Tachone avatar Dec 04 '23 03:12 Tachone

@fuweid

I think there is some important information that may help reproduce it:

  1. The machine may have been under heavy load when the problem occurred.
  2. bbolt panicked twice on different pages within 2 days; etcd restarted normally.
  3. The bad etcd node's <rev, KeyValue> data in bbolt has been inconsistent with the other nodes since rev={main:1054916837, sub:0}.

Tachone avatar Dec 04 '23 04:12 Tachone

@Tachone Thanks!

  1. The machine may have been under heavy load when the problem occurred.

The machine wasn't restarted, is that correct? Is there any useful kernel message in dmesg?

  2. bbolt panicked twice on different pages within 2 days; etcd restarted normally.

After the bbolt panic, could etcd still restart and serve requests?

  3. The bad etcd node's <rev, KeyValue> data in bbolt has been inconsistent with the other nodes since rev={main:1054916837, sub:0}.

For the bad etcd node, after the leader replicated the log, could it catch up with the other two nodes?

fuweid avatar Dec 04 '23 04:12 fuweid

@fuweid

  1. Yes. But I did not find any obvious abnormal information in dmesg.
  2. Yes.
  3. Yes. The RaftIndex and AppliedIndex of this node are consistent with the leader according to 'endpoint status'.

Tachone avatar Dec 04 '23 04:12 Tachone

I have seen a couple of corruptions, but nothing following a single pattern that would suggest an application issue. I would attribute them mostly to memory stamping (a page is written under the wrong page id). Still, investigating them would be useful to propose additional debug information and safeguards that would give us more insight into the problem.

My suggestion would be to separate the investigation and handle each report as an independent issue, as long as there is no strong suggestion that they have exactly the same root cause.

serathius avatar Dec 04 '23 08:12 serathius