
Avoid shard id update of replica if not matching with primary shard id

Open hpatro opened this issue 1 year ago • 23 comments

Shard_id shouldn't be updated for a replica if the shard_id for the primary is different.

During cluster setup, the shard id is established through extensions data propagation. If the engine crashes or restarts while the shard id is still being reconciled, the cluster config file can end up corrupted, which then causes the engine restart to fail.

Scenario:

Let's say there are two nodes in a cluster, Node A and Node B, and all admin operations are performed on Node B. Node A and Node B finish the handshake but haven't shared the extensions information yet. Node B is made a replica of Node A. As part of sharing the slaveof information, Node B also shares its temporary shard id. During regular packet processing on Node A, while handling the replication information, Node A's shard id gets applied to Node B. Then, during extensions processing on Node A, the shard id passed by Node B is applied, which diverges from Node A's shard id. A crash/restart that follows leads to an unrecoverable, corrupted cluster configuration file state.

[image: scenario diagram]
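
To make the intended guard concrete, here is a minimal sketch using simplified stand-in types (the real change lives in the updateShardId path of cluster_legacy.c; every name below is illustrative only): once a node has a primary, a shard id that differs from the primary's is rejected, so the primary's shard id is settled first and the replica inherits it.

#include <stdio.h>
#include <string.h>

#define SHARD_ID_LEN 40

/* Simplified stand-in for clusterNode; illustrative only. */
typedef struct node {
    char shard_id[SHARD_ID_LEN + 1];
    struct node *replicaof; /* NULL if this node is a primary */
} node;

/* Accept a new shard id only if the node is a primary, or if the id
 * matches the shard id already established on its primary. */
static int update_shard_id(node *n, const char *new_shard_id) {
    if (n->replicaof != NULL &&
        strcmp(n->replicaof->shard_id, new_shard_id) != 0) {
        return 0; /* reject: a replica must inherit its primary's shard id */
    }
    strncpy(n->shard_id, new_shard_id, SHARD_ID_LEN);
    n->shard_id[SHARD_ID_LEN] = '\0';
    return 1;
}

int main(void) {
    node a = {.shard_id = "shard-A", .replicaof = NULL};
    node b = {.shard_id = "tmp-B", .replicaof = &a}; /* B now replicates A */

    /* Late extension data carrying B's temporary shard id is rejected... */
    printf("temporary id applied: %d\n", update_shard_id(&b, "tmp-B"));
    /* ...while the id matching the primary's is accepted. */
    printf("primary id applied:   %d\n", update_shard_id(&b, "shard-A"));
    return 0;
}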

hpatro avatar May 29 '24 21:05 hpatro

I am not sure I understand the event sequence that leads to a corrupt state. Can you elaborate?

The change makes sense to me. Essentially with this change there is now an order in which the shard-id is updated in a shard: primary first and replicas next.

btw, this change also requires us to sequence the assignment of the primary before the invocation of updateShardId. This seems to be the case already at https://github.com/valkey-io/valkey/blob/unstable/src/cluster_legacy.c#L3092 and https://github.com/valkey-io/valkey/blob/unstable/src/cluster_legacy.c#L5194.

There are some timeout failures in the test pass though. That is a bit surprising.

PingXie avatar May 31 '24 04:05 PingXie

The scenario is slightly difficult to explain; I've tried my best to depict it (updated the main comment). @PingXie / @madolson, have a look.

hpatro avatar Jun 03 '24 19:06 hpatro

Codecov Report

Attention: Patch coverage is 80.00000% with 1 line in your changes missing coverage. Please review.

Project coverage is 71.06%. Comparing base (89d4577) to head (bd1a0bb). Report is 5 commits behind head on unstable.

Files with missing lines | Patch % | Lines
src/cluster_legacy.c | 80.00% | 1 Missing :warning:
Additional details and impacted files
@@             Coverage Diff              @@
##           unstable     #573      +/-   ##
============================================
+ Coverage     70.97%   71.06%   +0.09%     
============================================
  Files           123      123              
  Lines         65937    65937              
============================================
+ Hits          46797    46857      +60     
+ Misses        19140    19080      -60     
Files with missing lines | Coverage Δ
src/cluster_legacy.c | 86.20% <80.00%> (+0.26%) :arrow_up:

... and 15 files with indirect coverage changes


codecov[bot] avatar Jun 04 '24 18:06 codecov[bot]

unit/cluster/manual-takeover seems to get stuck on the CI. Unable to reproduce locally so far. Trying to understand why it sometimes gets stuck with this change.

hpatro avatar Jun 04 '24 19:06 hpatro

There are some timeout failures in the test pass though. That is a bit surprising.

From further investigation, the timeout failure comes from an infinite while loop within this block.

/* Walk up the replication chain until a node with no primary is found. */
clusterNode *clusterNodeGetPrimary(clusterNode *node) {
    while (node->replicaof != NULL) node = node->replicaof;
    return node;
}

https://github.com/valkey-io/valkey/blob/unstable/src/cluster_legacy.c#L5855C1-L5858C2

Looks like there can be a temporary invalid state in the cluster where nodes end up pointing to each other as primary/replica. We could take two approaches to this infinite loop:

  1. Deep dive into why the invalid state is reached (cyclic replication state).
  2. We could avoid this loop, as chained replication isn't a valid configuration in cluster mode (see the sketch below).
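
For approach 2, a minimal sketch with simplified stand-in types (not the actual cluster_legacy.c code): because chained replication isn't a valid configuration, resolving a node's primary needs at most one hop, so the lookup cannot spin even if two nodes transiently point at each other.

#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for clusterNode; illustrative only. */
typedef struct node {
    struct node *replicaof; /* NULL if this node is a primary */
} node;

/* Single-hop resolution: in a valid cluster a replica's replicaof is
 * itself a primary, so there is no chain to walk and no way to loop. */
static node *node_get_primary(node *n) {
    return n->replicaof ? n->replicaof : n;
}

int main(void) {
    node a = {.replicaof = NULL};
    node b = {.replicaof = &a};
    assert(node_get_primary(&b) == &a);

    /* Even a transient A <-> B cycle cannot hang this lookup. */
    a.replicaof = &b;
    assert(node_get_primary(&a) == &b);
    assert(node_get_primary(&b) == &a);
    return 0;
}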

hpatro avatar Jun 10 '24 18:06 hpatro

Deep dive into why the invalid state is reached (cyclic replication state).

We have had multiple of these issues in the past, and I think we always tried to figure it out. Maybe we should use this chance to add a helper method for setting the replicaof so that we check for loops.
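
A hedged sketch of what such a helper could look like, using hypothetical names and simplified types (the real code would operate on clusterNode and could pair this with a debug assert): before recording a new replicaof pointer, walk the proposed primary's own chain with a hop limit and refuse any assignment that would close a cycle.

#include <stdio.h>
#include <stddef.h>

/* Simplified stand-in for clusterNode; illustrative only. */
typedef struct node {
    const char *name;
    struct node *replicaof;
} node;

/* Hypothetical helper: set n->replicaof = primary, but only after making
 * sure that following primary's own replicaof chain never leads back to n.
 * Returns 1 on success, 0 if installing the pointer would create a loop. */
static int node_set_replicaof(node *n, node *primary) {
    int hops = 0;
    for (node *p = primary; p != NULL; p = p->replicaof) {
        if (p == n || ++hops > 16) {
            /* A test/debug build could assert here; in production we skip
             * the update and wait for fresher gossip to settle the topology. */
            return 0;
        }
    }
    n->replicaof = primary;
    return 1;
}

int main(void) {
    node a = {"A", NULL}, b = {"B", NULL};
    printf("B -> A: %d\n", node_set_replicaof(&b, &a)); /* accepted: 1 */
    printf("A -> B: %d\n", node_set_replicaof(&a, &b)); /* rejected: 0 */
    return 0;
}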

madolson avatar Jun 11 '24 16:06 madolson

Deep dive into why the invalid state is reached (cyclic replication state).

We have had multiple of these issues in the past, and I think we always tried to figure it out. Maybe we should use this chance to add a helper method for setting the replicaof so that we check for loops.

And if we detect a loop, do we crash?

hpatro avatar Jun 12 '24 20:06 hpatro

And if we detect a loop, do we crash?

Maybe we debug assert crash (as in only crash during a test). For normal production, we unwind, maybe ignore it, and wait for the other node to update us.
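
One hedged way to express "crash only during a test" (an illustrative macro, not valkey's actual assertion API): gate the hard assert behind a debug/test flag and, in production mode, just log and let the caller ignore the inconsistent state until fresher gossip arrives.

#include <assert.h>
#include <stdio.h>

/* Illustrative flag; a real server would key this off a debug or
 * test-only configuration rather than a global int. */
static int debug_assertions_enabled = 1;

/* Crash only when debug assertions are on; otherwise log the failed
 * check and keep serving. */
#define DEBUG_ASSERT(cond)                                                 \
    do {                                                                   \
        if (!(cond)) {                                                     \
            if (debug_assertions_enabled) assert(cond);                    \
            else fprintf(stderr, "ignored failed check: %s\n", #cond);     \
        }                                                                  \
    } while (0)

int main(void) {
    debug_assertions_enabled = 0; /* "production" mode */
    DEBUG_ASSERT(1 == 2);         /* logged, not fatal */
    puts("still running");
    return 0;
}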

madolson avatar Jun 12 '24 20:06 madolson

The scenario is slightly difficult to explain; I've tried my best to depict it (updated the main comment). @PingXie / @madolson, have a look.

Great diagram! Thanks @hpatro. This helps a lot.

PingXie avatar Jun 13 '24 06:06 PingXie

And if we detect a loop, do we crash?

Maybe we debug assert crash (as in only crash during a test). For normal production, we unwind, maybe ignore it, and wait for the other node to update us.

debugAssert is reasonable but I don't think we should crash the server just because there is a loop. In fact, we have logic to break the loop already. I will suggest a fix in #609

PingXie avatar Jun 13 '24 18:06 PingXie

@hpatro Sorry for taking so long to circle back on this, the DCO was failing last time and I forgot to ping you to update. I think this is good to merge otherwise.

madolson avatar Jul 01 '24 05:07 madolson

@madolson Had to force push. PTAL.

hpatro avatar Jul 01 '24 18:07 hpatro

https://github.com/valkey-io/valkey/actions/runs/9768739963

madolson avatar Jul 02 '24 22:07 madolson

We actually hit the replication cycle assert rather consistently in the test run @madolson shared above. This is something that I haven't seen before.

*** Crash report found in valkey_2/log.txt ***
=== VALKEY BUG REPORT START: Cut & paste starting from here ===
44713:M 02 Jul 2024 22:52:22.118 # === ASSERTION FAILED ===
44713:M 02 Jul 2024 22:52:22.118 # ==> cluster_legacy.c:5879 'primary->replicaof == ((void *)0)' is not true

PingXie avatar Jul 03 '24 06:07 PingXie

We actually hit the replication cycle assert rather consistently in the test run @madolson shared above. This is something that I haven't seen before.


*** Crash report found in valkey_2/log.txt ***
=== VALKEY BUG REPORT START: Cut & paste starting from here ===
44713:M 02 Jul 2024 22:52:22.118 # === ASSERTION FAILED ===
44713:M 02 Jul 2024 22:52:22.118 # ==> cluster_legacy.c:5879 'primary->replicaof == ((void *)0)' is not true

Yeah, this change invokes the API more frequently. Someone needs to deep dive further to understand how we reach this state.

hpatro avatar Jul 03 '24 15:07 hpatro

Yeah, this change invokes the API more frequently. Someone needs to deep dive further to understand how we reach this state.

I did a deep dive on it with an AWS engineer last week; I have a partial fix and will post it early next week.

madolson avatar Jul 06 '24 18:07 madolson

I took a look too and realized it's a regression introduced by my slot migration PR #445. This change started allowing a replica to report its primary's slot states and trigger clusterUpdateSlotsConfigWith.

[image: PR #445 - Slot Migration Changes]

Here's what I think happens in these test failures involving a 3-node shard:

[T1] - Node A, B, and C are in the same shard with A as the primary.
[T2] - Node A loses its primaryship to B via a graceful/manual failover.
[T3] - After winning the election, B broadcasts the news to every node in the cluster, including C.
[T4] - C receives B's latest PING message and correctly registers B as its new primary.
[T5] - C then sends a new PING message to A, claiming B is its primary with all the slots.
[T6] - A still hasn't received B's broadcast message from [T3] when C's PING message from [T5] arrives at A. And this is where things go wrong: a replicaof cycle is created.

At this point, A still thinks it's the primary of the shard, and B -> replicaof == A. Since C is still a replica (as before), the role change handling logic doesn't apply. So A enters clusterUpdateSlotsConfigWith using C's slot information (which is up to date with B's). More importantly, B is passed in as the sender while at the same time A assumes B -> replicaof == A. The slot ownership update logic correctly gives ownership of the slots to B. Now, because A loses all its slots to B, which is in the same shard and has a higher config epoch, A is demoted to a replica of the winner, B. And with this PR, we set A -> replicaof = B, completing the replicaof cycle.
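
To make that last step concrete, here is a hedged sketch with a hypothetical helper and simplified types (not the actual demotion code, and not necessarily the fix chosen in #609): when A demotes itself to a replica of the slot winner B, its stale view may still have B pointing back at A, and clearing that stale edge first is one way to keep the two nodes from ever pointing at each other.

#include <stdio.h>
#include <stddef.h>

/* Simplified stand-in for clusterNode; illustrative only. */
typedef struct node {
    const char *name;
    struct node *replicaof;
} node;

/* Hypothetical demotion step: `self` lost all its slots to `winner`.
 * With a stale view, winner->replicaof may still point back at self;
 * break that edge before installing self->replicaof = winner, so the
 * two nodes never end up pointing at each other. */
static void demote_to_replica_of(node *self, node *winner) {
    if (winner->replicaof == self) {
        winner->replicaof = NULL; /* stale edge from an old PING/PONG */
    }
    self->replicaof = winner;
}

int main(void) {
    node a = {"A", NULL}, b = {"B", NULL};
    b.replicaof = &a;             /* A's stale view: B still replicates A */
    demote_to_replica_of(&a, &b); /* A loses its slots to B */
    printf("A -> replicaof: %s\n", a.replicaof ? a.replicaof->name : "none");
    printf("B -> replicaof: %s\n", b.replicaof ? b.replicaof->name : "none");
    return 0;
}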

PingXie avatar Jul 07 '24 01:07 PingXie

This still fails after merging #754, due to a primary-replica cycle. It still needs a deep dive.

hpatro avatar Jul 17 '24 16:07 hpatro

Interesting. @madolson can you share your findings when you get a chance? I assume it is different from #754?

PingXie avatar Jul 17 '24 20:07 PingXie

I think we should consider whether this PR is still needed if/when we reduce the delay (see: https://github.com/valkey-io/valkey/pull/778). This was a great PR and moved mountains in terms of figuring out what was wrong, but it would be great to eliminate the delay entirely.

Instead of mitigating the effects of the shard ID not being stabilized, we can instead connect the needed flags to the node immediately during the handshake, thus avoiding this situation entirely. This approach will also have the benefit of speeding up stabilization, as fewer "hops" will be needed to reach a shard ID consensus.

bentotten avatar Aug 29 '24 22:08 bentotten

Interesting. @madolson can you share your findings when you get a chance? I assume it is different from #754?

I have a theory about how this could happen.

  1. We had a stale PONG message issue, which was fixed in commit https://github.com/valkey-io/valkey/commit/28976a9003c6dd5cdd7225c5bc90743b4fcde13c https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3271
  2. However, we don't bail after detecting this stale message; we proceed to https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3311
  3. And then we update the sender's replicaof based on the stale message at https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3317

Now, imagine the following scenario:

[T0] Three nodes: primary A with replica B, and an observer node N.
[T1] B's PONG message to N, claiming A is its primary, gets stuck somewhere on the way to N.
[T2] B becomes primary after a manual failover and then notifies A (and N, but that message gets stuck behind the PONG message from [T1]).
[T3] A becomes a replica of B.
[T4] A, now a replica of B, sends a PING to N, which goes through the following steps that end up "promoting" B to a primary, indirectly:

  1. https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3257
  2. https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3267
  3. https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3269
  4. https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3281 and sets A's replicaof to B
  5. https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3311
  6. https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3317

[T5] Finally, B's PONG message to N from [T1] arrives at N and goes through the following steps:

  7. https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3257
  8. https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3264 Due to step 4, B got promoted to primary, implicitly
  9. https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3267 However the epoch is stale, which is correctly handled
  10. https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3271
  11. https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3273 We don't bail but instead continue to
  12. https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3311 and finally update B->replicaof to A, completing the loop
  13. https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3317

I have seen stale messages in the past, and I also notice that the latest failure is in the codecov run, which could alter the timing quite a bit, so I think this theory is very plausible.

The fix would be to bail immediately after detecting the stale message:

https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3273
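
A minimal, hypothetical illustration of that "bail early" shape (stand-in names and types, not the actual clusterProcessPacket code): once the sender's reported config epoch is seen to be older than what we already track, stop processing the packet instead of falling through to the replicaof update.

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins; illustrative only. */
typedef struct {
    uint64_t config_epoch;
    int claims_replica; /* stands in for the sender->replicaof bookkeeping */
} peer;

typedef struct {
    uint64_t sender_config_epoch;
    int sender_claims_replica;
} packet;

/* Process a gossip packet from `sender`. Returns 0 if the packet was
 * discarded as stale, 1 if it was applied. */
static int process_packet(peer *sender, const packet *pkt) {
    if (pkt->sender_config_epoch < sender->config_epoch) {
        /* Stale message: bail here rather than continuing on to the
         * replicaof handling with outdated information. */
        return 0;
    }
    sender->config_epoch = pkt->sender_config_epoch;
    sender->claims_replica = pkt->sender_claims_replica;
    return 1;
}

int main(void) {
    peer b = {.config_epoch = 7, .claims_replica = 0};
    packet stale = {.sender_config_epoch = 5, .sender_claims_replica = 1};
    printf("stale packet applied: %d\n", process_packet(&b, &stale)); /* 0 */
    printf("B still primary: %d\n", b.claims_replica == 0);           /* 1 */
    return 0;
}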

BTW, we have another undetected stale message issue (#798)

PingXie avatar Aug 30 '24 01:08 PingXie

I think we should consider whether this PR is still needed if/when we reduce the delay (see: #778). This was a great PR and moved mountains in terms of figuring out what was wrong, but it would be great to eliminate the delay entirely.

Instead of mitigating the effects of the shard ID not being stabilized, we can instead connect the needed flags to the node immediately during the handshake, thus avoiding this situation entirely. This approach will also have the benefit of speeding up stabilization, as fewer "hops" will be needed to reach a shard ID consensus.

Yeah I think we will need both. Let me pick up my slack next ... :(

PingXie avatar Aug 30 '24 01:08 PingXie

The tests still fail for replicaof loops. I think we need a fix for #1015 first.

PingXie avatar Sep 11 '24 04:09 PingXie

With the stale gossip message ignore change (https://github.com/valkey-io/valkey/pull/1777) merged, this looks safe to get in.

hpatro avatar Apr 15 '25 23:04 hpatro

@PingXie / @enjoy-binbin Any further comments, or shall we merge it?

hpatro avatar Apr 18 '25 16:04 hpatro

Finally! Go for it 😄

PingXie avatar Apr 18 '25 17:04 PingXie