valkey
Avoid shard id update of a replica if it does not match the primary's shard id
A replica's shard_id shouldn't be updated if it differs from the primary's shard_id.
During cluster setup, the shard id is established through extensions data propagation. If the engine crashes or restarts while shard id reconciliation is in progress, the cluster config file can end up corrupted, which causes the engine restart to fail.
Scenario:
Let's say there are two nodes in a cluster, Node A and Node B. All admin operations are performed on Node B. Node A and Node B finish the handshake but haven't shared the extensions information yet. Node B is made a replica of Node A. As part of sharing the slaveof information, Node B also shares its temporary shard id. During regular packet processing on Node A, while handling the replication information, the shard id of Node A is applied to Node B. Then, during extensions processing on Node A, the shard id passed by Node B is applied, which diverges from the shard id of Node A. A crash/restart after this leads to an unrecoverable, corrupted cluster configuration file state.
I am not sure I understand the event sequence that leads to a corrupt state. Can you elaborate?
The change makes sense to me. Essentially with this change there is now an order in which the shard-id is updated in a shard: primary first and replicas next.
btw, this change also requires us to sequence the assignment of the primary before the invocation of updateShardId. This seems to be the case already at https://github.com/valkey-io/valkey/blob/unstable/src/cluster_legacy.c#L3092 and https://github.com/valkey-io/valkey/blob/unstable/src/cluster_legacy.c#L5194.
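The invariant being described here (primary first, replicas next) can be sketched with a toy model. Everything below is illustrative: `cnode`, `cnode_update_shard_id`, and `SHARD_ID_LEN` are stand-in names, not valkey's actual `clusterNode`/shard-id code.

```c
/* Toy model of the guard in this PR: a replica only accepts a shard id
 * that matches its primary's, so the primary's shard id always settles
 * first. Illustrative names only; not the actual valkey implementation. */
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define SHARD_ID_LEN 40

typedef struct cnode {
    char shard_id[SHARD_ID_LEN + 1];
    struct cnode *replicaof; /* NULL for a primary */
} cnode;

/* Returns 1 if the shard id was applied, 0 if it was rejected because
 * it diverges from the primary's shard id. */
static int cnode_update_shard_id(cnode *node, const char *shard_id) {
    if (node->replicaof != NULL &&
        strncmp(node->replicaof->shard_id, shard_id, SHARD_ID_LEN) != 0) {
        return 0; /* replica: the primary's shard id wins, skip the update */
    }
    snprintf(node->shard_id, sizeof(node->shard_id), "%s", shard_id);
    return 1;
}
```

As the comment above points out, this only helps if `replicaof` is assigned before the shard id update is attempted.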
There are some timeout failures in the test pass though. That is a bit surprising.
The scenario is slightly difficult to explain, I've tried my best to depict it (updated the main comment). @PingXie / @madolson have a look.
Codecov Report
Attention: Patch coverage is 80.00000% with 1 line in your changes missing coverage. Please review.
Project coverage is 71.06%. Comparing base (89d4577) to head (bd1a0bb). Report is 5 commits behind head on unstable.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/cluster_legacy.c | 80.00% | 1 Missing :warning: |
Additional details and impacted files
```
@@ Coverage Diff @@
##           unstable     #573   +/-   ##
============================================
+ Coverage     70.97%   71.06%   +0.09%
============================================
  Files           123      123
  Lines         65937    65937
============================================
+ Hits          46797    46857      +60
+ Misses        19140    19080      -60
```
| Files with missing lines | Coverage Δ | |
|---|---|---|
| src/cluster_legacy.c | 86.20% <80.00%> (+0.26% :arrow_up:) | |
unit/cluster/manual-takeover seems to get stuck on the CI. Unable to reproduce locally so far. Trying to understand why it sometimes gets stuck with this change.
> There are some timeout failures in the test pass though. that is a bit surprising.
From further investigation, the timeout failure happens due to an infinite while loop within this block:
```c
clusterNode *clusterNodeGetPrimary(clusterNode *node) {
    while (node->replicaof != NULL) node = node->replicaof;
    return node;
}
```
https://github.com/valkey-io/valkey/blob/unstable/src/cluster_legacy.c#L5855C1-L5858C2
It looks like there can be a temporary invalid state in the cluster where nodes point to each other as primary/replica. We could take two approaches to this infinite loop:
- Deep dive into why the invalid state is reached (cyclic replication state).
- We could avoid this loop as chained replication isn't a valid configuration in cluster mode.
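As a sketch of the second option, the walk can be bounded: in cluster mode a valid topology has at most one hop from replica to primary, so a second hop already means the replicaof graph is broken (chained or cyclic). Names here (`cnode`, `cnode_get_primary`) are illustrative, not the actual valkey code.

```c
#include <assert.h>
#include <stddef.h>

typedef struct cnode {
    struct cnode *replicaof; /* NULL for a primary */
} cnode;

/* Bounded variant of the primary lookup: instead of looping until
 * replicaof is NULL (which spins forever on a cycle), stop after one
 * hop and flag anything deeper as a broken topology. */
static cnode *cnode_get_primary(cnode *node, int *broken) {
    int hops = 0;
    *broken = 0;
    while (node->replicaof != NULL) {
        node = node->replicaof;
        if (++hops > 1) { /* chain or cycle: invalid in cluster mode */
            *broken = 1;
            break;
        }
    }
    return node;
}
```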
> Deep dive into why the invalid state is reached (cyclic replication state).
We have had multiple of these issues in the past, and I think we always tried to figure it out. Maybe we should use this chance to add a helper method for setting the replicaof so that we check for loops.
> Deep dive into why the invalid state is reached (cyclic replication state).
>
> We have had multiple of these issues in the past, and I think we always tried to figure it out. Maybe we should use this chance to add a helper method for setting the replicaof so that we check for loops.
And if we detect a loop, do we crash?
> And if we detect a loop, do we crash?
Maybe we debug assert crash (as in, only crash during a test). For normal production, we unwind, maybe ignore it, and wait for the other node to update us.
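The suggested helper might look like this sketch: one function through which every `replicaof` assignment goes, rejecting (and, in a debug build, asserting on) any assignment that would close a loop. All names are hypothetical, not valkey's.

```c
#include <assert.h>
#include <stddef.h>

typedef struct cnode {
    struct cnode *replicaof; /* NULL for a primary */
} cnode;

/* Set node->replicaof = primary, but first walk up from the proposed
 * primary; if that walk reaches `node`, the assignment would create a
 * replicaof cycle, so reject it. A debug build could assert here, while
 * production just ignores the update and waits for fresher gossip. */
static int cnode_set_replicaof(cnode *node, cnode *primary) {
    int hops = 0;
    for (cnode *p = primary; p != NULL; p = p->replicaof) {
        if (p == node) return 0;   /* would form a loop: reject */
        if (++hops > 16) return 0; /* pre-existing cycle: give up */
    }
    node->replicaof = primary;
    return 1;
}
```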
> The scenario is slightly difficult to explain, I've tried my best to depict it (updated the main comment). @PingXie / @madolson have a look.
Great diagram! Thanks @hpatro. This helps a lot.
> And if we detect a loop, do we crash?
>
> Maybe we debug assert crash (as in only crash during a test). For normal production, we unwind we maybe ignore it and wait for the other node to update us.
debugAssert is reasonable but I don't think we should crash the server just because there is a loop. In fact, we have logic to break the loop already. I will suggest a fix in #609
@hpatro Sorry for taking so long to circle back on this, the DCO was failing last time and I forgot to ping you to update. I think this is good to merge otherwise.
@madolson Had to force push. PTAL.
https://github.com/valkey-io/valkey/actions/runs/9768739963
We actually hit the replication cycle assert rather consistently in the test run @madolson shared above. This is something that I haven't seen before.
```
*** Crash report found in valkey_2/log.txt ***
=== VALKEY BUG REPORT START: Cut & paste starting from here ===
44713:M 02 Jul 2024 22:52:22.118 # === ASSERTION FAILED ===
44713:M 02 Jul 2024 22:52:22.118 # ==> cluster_legacy.c:5879 'primary->replicaof == ((void *)0)' is not true
```
> We actually hit the replication cycle assert rather consistently in the test run @madolson shared above. This is something that I haven't seen before.
>
> *** Crash report found in valkey_2/log.txt *** === VALKEY BUG REPORT START: Cut & paste starting from here === 44713:M 02 Jul 2024 22:52:22.118 # === ASSERTION FAILED === 44713:M 02 Jul 2024 22:52:22.118 # ==> cluster_legacy.c:5879 'primary->replicaof == ((void *)0)' is not true
Yeah, this change invokes the API more frequently. Someone needs to deep dive further to understand how we reach this state.
> Yeah, this change invokes the API more frequently. Someone needs to deep dive further to understand how we reach this state.
I deep dived it with an AWS engineer last week, I have a partial fix and will post it early next week.
I took a look too and realized it's a regression introduced by my slot migration PR #445. This change started allowing a replica to report its primary's slot states and trigger clusterUpdateSlotsConfigWith.
PR #445 - Slot Migration Changes.
Here's what I think happens in these test failures involving a 3-node shard:
[T1] - Node A, B, and C are in the same shard with A as the primary.
[T2] - Node A loses its primaryship to B via a graceful/manual failover.
[T3] - After winning the election, B broadcasts the news to every node in the cluster, including C.
[T4] - C receives B's latest PING message and correctly registers B as its new primary.
[T5] - C then sends a new PING message to A, claiming B is its primary with all the slots.
[T6] - A still hasn't received B's broadcast message from [T3], and C's PING message from [T4] arrives at A.
And this is where things go wrong: a replicaof cycle is created.
At this point, A still thinks it's the primary of the shard, and B->replicaof == A. Since C is still a replica (as before), the role change handling logic doesn't apply. So, A enters clusterUpdateSlotsConfigWith using C's slot information (which is up to date with B's). More importantly, B is passed in as the sender while at the same time A assumes B->replicaof == A. The slot ownership update logic correctly gives ownership of the slots to B. Now, because A loses all its slots to B, who is in the same shard with a higher config epoch, A is demoted to a replica of the winner, B. And now with this PR, we set A->replicaof = B, completing the replicaof cycle.
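The end state of this sequence can be replayed with plain pointers. This is only an illustration of the described [T1]..[T6] outcome (hypothetical names, not valkey code): from A's stale point of view B is still its replica, and the slot-ownership demotion added by this PR then points A back at B.

```c
#include <assert.h>
#include <stddef.h>

typedef struct cnode {
    struct cnode *replicaof; /* NULL means "locally believed to be a primary" */
} cnode;

/* The step this PR adds: a primary that loses all its slots to a
 * higher-epoch node in the same shard demotes itself under the winner. */
static void demote_to_replica_of(cnode *self, cnode *winner) {
    self->replicaof = winner;
}
```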
This still fails after merging #754 due to primary-replica cycle. Still needs deep dive.
Interesting. @madolson can you share your findings when you get a chance? I assume it is different from #754?
I think we should consider if this PR is still needed if/when we reduce the delay (see: https://github.com/valkey-io/valkey/pull/778) - This was a great PR and moved mountains in terms of figuring out what was wrong, but it would be great to reduce the delay entirely
Instead of mitigating the effects of shard ID not being stabilized, we can instead connect the needed flags to the node immediately during the handshake, thus avoiding this situation entirely. This approach will also have the benefit of increasing the speed of stabilization, as there will be less "hops" needed to reach a shard ID consensus.
> Interesting. @madolson can you share your findings when you get a chance? I assume it is different from #754?
I have a theory about how this could happen.
- We had a stale `PONG` message issue, which was fixed in commit https://github.com/valkey-io/valkey/commit/28976a9003c6dd5cdd7225c5bc90743b4fcde13c https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3271
- However, we didn't bail after detecting this stale message. We proceed to https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3311
- And then update `sender`'s `replicaof` based on the stale message at: https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3317
Now, imagine the following scenario:
[T0] Three nodes: primary A with replica B, and an observer node N.
[T1] B's PONG message to N, claiming A is its primary, gets stuck somewhere on the way to N.
[T2] B becomes primary after a manual failover and then notifies A (and N, but that message will get stuck behind the PONG message from [T1]).
[T3] A becomes a replica of B.
[T4] A, now a replica of B, sends PING to N, which goes through the following steps that end up "promoting" B to a primary, indirectly:
- https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3257
- https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3267
- https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3269
- https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3281

and sets `A`'s `replicaof` to `B`:
- https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3311
- https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3317

[T5] Finally, `B`'s PONG message to `N` from [T1] arrives on `N` and it goes through the following steps:
- https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3257
- https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3264

Due to step [T4], `B` got promoted to primary, implicitly:
- https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3267

However, the epoch is stale, which is correctly handled:
- https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3271
- https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3273

We don't bail but instead continue to:
- https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3311

and finally update `B`'s `replicaof` to `A`, completing the loop:
- https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3317
I have seen stale messages in the past, and I also notice that the latest failure is in the codecov run, which could alter the timing quite a bit, so I think this theory is very plausible.
The fix would be to bail immediately after detecting the stale message
https://github.com/valkey-io/valkey/blob/2b76c8fbe2ccadaee2149e4b9b7c7df7ff0d07b6/src/cluster_legacy.c#L3273
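The proposed fix boils down to an early return. The sketch below is illustrative only (`node_view`, `accept_replication_claim`, and the epoch field are made-up names, not the actual valkey message-handling code): once the claimed config epoch is older than what we already recorded, drop the message before any `replicaof` update can run.

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint64_t known_config_epoch; /* epoch last recorded for the claimed primary */
} node_view;

/* Returns 1 if the replication claim may be processed further, 0 if the
 * message is stale and must be dropped immediately, before any
 * replicaof bookkeeping runs on it. */
static int accept_replication_claim(const node_view *view, uint64_t msg_epoch) {
    if (msg_epoch < view->known_config_epoch) return 0; /* stale: bail */
    return 1;
}
```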
BTW, we have another undetected stale message issue (#798)
> I think we should consider if this PR is still needed if/when we reduce the delay (see: #778) - This was a great PR and moved mountains in terms of figuring out what was wrong, but it would be great to reduce the delay entirely
>
> Instead of mitigating the effects of shard ID not being stabilized, we can instead connect the needed flags to the node immediately during the handshake, thus avoiding this situation entirely. This approach will also have the benefit of increasing the speed of stabilization, as there will be less "hops" needed to reach a shard ID consensus.
Yeah I think we will need both. Let me pick up my slack next ... :(
The tests still fail for replicaof loops. I think we need a fix for #1015 first.
With the stale gossip message ignore change merged https://github.com/valkey-io/valkey/pull/1777, this looks safe to get in.
@PingXie / @enjoy-binbin Any further comments, or shall we merge it?
Finally! Go for it 😄