Propagate sharded pubsub data via replication link
Issue: https://github.com/redis/redis/issues/12196 Ref: https://github.com/redis/redis/pull/12929
Sharded pubsub uses the cluster link for message propagation across nodes. The cluster link has a high payload overhead and is particularly expensive if the message to be propagated is comparatively small.
As the sharded pubsub message needs to be propagated only within a shard, if the message from a client is received on a primary, the replication link can be used for propagation instead of the cluster link. This yields two benefits:
- Throughput will be higher compared to the initial implementation.
- The message delivery guarantee will be better, as the message will be accumulated in the client output buffer in case of a replication link disconnection and will be retried.
This is in line with how message propagation is performed in cluster-disabled mode.
Notes:
- It's a breaking change: SPUBLISH is marked as a write command and can only be sent to a primary. The command will fail and the client will receive a MOVED response if published on a replica.
- The sharded pubsub logic that handles messages via cluster bus propagation needs to remain in place to avoid issue(s) during upgrade(s).
- Tested mixed-version nodes in a shard (Setup 1: unstable as primary, shardpubsub-replication-link as replica; Setup 2: shardpubsub-replication-link as primary, unstable as replica); subscribers were able to receive messages on the primary/replica in each setup.
Benchmark:
Setup:
- 1 Primary (6379) + 1 Replica (6380)
- No client subscription
Scenario 1: Message(s) published on primary:
Request:
src/redis-benchmark -h 127.0.0.1 -l -p 6379 -n 10000000 -P 20 SPUBLISH hello world
Summary:
throughput summary: 798148.25 requests per second
latency summary (msec):
avg min p50 p95 p99 max
1.184 0.264 1.199 1.391 1.455 12.159
Scenario 2: Message(s) published via cluster link (prior to this change):
Request:
src/redis-benchmark -h 127.0.0.1 -l -p 6379 -n 10000000 -P 20 SPUBLISH hello world
Summary:
throughput summary: 556916.94 requests per second
latency summary (msec):
avg min p50 p95 p99 max
1.732 0.528 1.487 1.663 1.727 153.471
Gain: 43% on using replication link over cluster link.
https://github.com/valkey-io/valkey/issues/76 would have been helpful here as well.
There is another important point here, which is if we moved to cluster V2 we would no longer need to maintain cluster links for pubsub.
Some context, this was approved by the previous Redis core team, but was never finished because of the license change.
@valkey-io/core-team Messaging folks here for approval. Please 👍 or 👎 with your opinion if you have one.
We'd still need to support cluster bus for unsharded pubsub right?
I fail to see the difficulty in sending data like this on a cluster v2 bus. Just maybe multiple jumps in worst case. (We could even look at real message brokers like rabbitmq for how they do this.)
Let's open up some issue about cluster v2, what we need from it and discuss different approaches.
LGTM, btw, why didn’t we choose replication stream for shard pub/sub at the beginning?
I think compatibility with the way the cluster worked was the main reasoning. Since you could send a message to a replica and read it on the primary. I don't recall why that seemed important at the time though, and all usage that I have seen of it hasn't cared about that property.
> We'd still need to support cluster bus for unsharded pubsub right?
For cluster bus, we will probably need to continue supporting it. Cluster v2 will be an opt-in feature, and I think we should consider dropping support for it.
> Let's open up some issue about cluster v2, what we need from it and discuss different approaches.
This is true, we could still do it. My observation about usage is that it's not really used and when it is used it's caused a lot of problems.
> Cluster v2 will be an opt-in feature
This is one of the major points I don't like. I think the cluster should do this version negotiation by itself. Two parallel solutions are very bad compared to gradually transforming the current solution.
Btw, I don't see any problem to propagate or broadcast any data (like pubsub) in a future V2 cluster, although it may require multiple hops to reach all nodes, if not all nodes are connected to all other nodes.
I'm ok with it, and it is a breaking change.
> LGTM, btw, why didn’t we choose replication stream for shard pub/sub at the beginning?
- Didn't realize the cluster message overhead to be this high, until a user highlighted in https://github.com/redis/redis/issues/12196
- Message publishing on any node (primary/replica) seemed like a good feature to have (similar to global pubsub in cluster-enabled mode).
> We'd still need to support cluster bus for unsharded pubsub right?
Alternative solution: we could maybe alias PUBLISH/SUBSCRIBE/UNSUBSCRIBE to SPUBLISH/SSUBSCRIBE/SUNSUBSCRIBE in cluster mode to keep supporting the functionality but not use cluster links.
> Alternative solution: we could maybe alias PUBLISH/SUBSCRIBE/UNSUBSCRIBE to SPUBLISH/SSUBSCRIBE/SUNSUBSCRIBE in cluster mode to keep supporting the functionality but not use cluster links.
If we start returning MOVED for these, it's a breaking change in itself. Also consider a cluster with mixed version of nodes.
I think it would be fairly easy to introduce a new message in the "legacy" cluster bus to send pubsub messages without the full Ping packet (slot bitmap, etc.). We just need a flag bit in MEET to indicate that a node supports this feature. It's low hanging fruit for better cluster performance IMO.
> I think it would be fairly easy to introduce a new message in the "legacy" cluster bus to send pubsub messages without the full Ping packet (slot bitmap, etc.). We just need a flag bit in MEET to indicate that a node supports this feature. It's low hanging fruit for better cluster performance IMO.
I'm going to argue that although it is possible, it doesn't seem like the right long-term solution. Replication still seems like the better fit for pubsub compared to the cluster bus.
@madolson I agree replication is better, but the context of that (off topic) idea was that we still need to support unsharded pubsub.
> I'm going to argue that although it is possible, it doesn't seem like the right long-term solution. Replication still seems like the better fit for pubsub compared to the cluster bus.
@madolson, I assume the long-term solution here is cluster V2? If so, agreed that using replication would be fully compatible with cluster v2. That said, I think a lighter version of clusterMsg, as @zuiderkwast suggested, is a good thing regardless. It works for both sharded and unsharded pubsub, which we would have to continue to support. It does mean, though, that sharded pubsub on cluster V2 would require a different solution when it could have reused the same implementation.
@soloestoy, is your call-out of "breaking change" referring to the mixed-version cluster case? I think it would apply to both the replication and the lighter clusterMsg ideas. I wonder if we could introduce a mutable server config to control this new behavior such that an admin can enable it only after they know for sure every node is running the latest version.
> I wonder if we could introduce a mutable server config to control this new behavior such that an admin can enable it only after they know for sure every node is running the latest version.
Let's avoid a config if we can. What's wrong with a capability bit in the clusterMsg? I don't get why people don't seem to like it. Otherwise, a REPLCONF flag or version field could solve this.
> @soloestoy, is your call-out of "breaking change" referring to the mixed-version cluster case? I think it would apply to both the replication and the lighter clusterMsg ideas.
The breaking change I mean is that the SPUBLISH command is marked as a write command.
> @soloestoy, is your call-out of "breaking change" referring to the mixed-version cluster case? I think it would apply to both the replication and the lighter clusterMsg ideas.
>
> The breaking change I mean is that the SPUBLISH command is marked as a write command.
With the write-command association, we get the redirection logic to the primary built in. My initial change handled the redirection to the primary manually, which was a few conditional statements (it did the job) and wouldn't be a breaking change. Any preferences @valkey-io/core-team? I would lean towards a non-breaking change, as the Pub/Sub mechanism shouldn't be restricted due to a cluster coverage issue.
> I would lean towards a non-breaking change, as the Pub/Sub mechanism shouldn't be restricted due to a cluster coverage issue.
The concern I had with this is that then we aren't notifying clients about the redirection. Clients like the go client would look up the command, see that it's not marked as write, and then send commands to replicas before getting MOVED. We should be able to mark a command as write without including it in the write category. That is why I was suggesting a special write category. We can use the same flag to also omit it from cluster convergence.
> We should be able to mark a command as write without including it in the write category.
We're talking about command flags implying ACL categories, right? I filed this a while ago: #417
After reading #525, I changed my mind. Since supporting multi-publish requires introducing a new cluster message type anyway, and the new message type should be light (without the route table), I think it's better to use the new message instead of the replication link for sharded publish. Then we would not break the behavior on the user side. A new message type in the cluster bus is maybe an internal breaking change, but it would be introduced in a new major version, which I think is not a big deal: all nodes in a cluster should run the same major version except during a version upgrade, and losing some messages during an upgrade is acceptable.
> Then we would not break the behavior on the user side. A new message type in the cluster bus is maybe an internal breaking change, but it would be introduced in a new major version, which I think is not a big deal: all nodes in a cluster should run the same major version except during a version upgrade, and losing some messages during an upgrade is acceptable.
I don't think we should lose messages during upgrade, but I think doing an upgrade is pretty easy to orchestrate. We can send a bit if we support the multi-publish functionality, and if we don't receive the bit (or don't know their state), we can send it in the old format.
We're parking this PR for the time being while we evaluate if https://github.com/valkey-io/valkey/issues/557 will solve the scalability issues.
One thing which I wanted a decision on before closing this PR.
@madolson Do we want to disable SPUBLISH on replicas with 8.0, irrespective of the underlying data propagation layer? From our initial discussion, it would have aligned well with the planned cluster design improvements.
> One thing which I wanted a decision on before closing this PR.
For now I think it's an unnecessary breaking change. We can fix it with cluster v2 if we really have to.
A generic solution has been implemented with a light message header. It will be used for Pub/Sub data starting with the next major version.
https://github.com/valkey-io/valkey/pull/654/