Add streams
Available commands:
- XADD
- XDEL
- XINFO STREAM
- XLEN
- XRANGE
- XREAD
- XREVRANGE
- XTRIM
Design notes
Link to the Redis data type description: https://redis.io/docs/manual/data-types/streams/
A stream is a sequence of entries.
Each entry has a unique ID of the form "1-2": two 64-bit numbers separated by a hyphen.
By default, the first number is set to a millisecond timestamp, and the second one is a so-called sequence number - for cases when more than one entry was added at the same millisecond.
The value of an entry is a set of key-value pairs.
For example, for the command:
XADD s1 1-0 key1 val1 key2 val2
the ID is 1-0 and the value is key1 val1 key2 val2.
In RocksDB entries are represented as key-value pairs where the key is formed as:
key | version | entry-ID-milliseconds-value | entry-ID-sequence-number-value
and the value is encoded as:
key1-length(fixed-32) | key1 | val1-length(fixed-32) | val1 | key2-length(fixed-32) | key2 | val2-length(fixed-32) | val2
Thanks to the structure of a key, all entries in a stream are sorted in chronological order.
As for the value encoding: this is the first idea that came to my mind, and maybe it's not very efficient because it has an overhead of 4 bytes per argument.
Why did I introduce such a weird encoding scheme? Because when reading entries, Redis responds not with a single string but with a structured reply:
1) 1) "s1"
2) 1) 1) "1-0"
2) 1) "key1"
2) "val1"
3) "key2"
4) "val2"
Perhaps the command args could be joined with a ' ' (space) into a single string, and this string saved in RocksDB; after reading, it would be split while constructing the response. But with that scheme I was worried about possible spaces inside arguments and how to deal with them.
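To make the layout concrete, here is a minimal C++ sketch of the encoding described above. The helper names are hypothetical, not the PR's actual functions; the fixed-32 little-endian lengths and the big-endian key fields are assumptions chosen to keep the example self-consistent.

#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Hypothetical helper: append a 32-bit length in fixed-width form.
static void PutFixed32(std::string *dst, uint32_t v) {
  char buf[sizeof(v)];
  std::memcpy(buf, &v, sizeof(v));  // assumes a little-endian host
  dst->append(buf, sizeof(buf));
}

// Encode the field-value list of one entry:
// key1-len | key1 | val1-len | val1 | ...
std::string EncodeStreamEntryValue(const std::vector<std::string> &args) {
  std::string encoded;
  for (const auto &arg : args) {
    PutFixed32(&encoded, static_cast<uint32_t>(arg.size()));
    encoded.append(arg);
  }
  return encoded;
}

// Build the RocksDB key: user key | version | entry-ID ms | entry-ID seq.
// Storing ms and seq big-endian (most significant byte first) keeps entries
// chronologically ordered under RocksDB's default bytewise comparator.
std::string EncodeStreamEntryKey(const std::string &ns_key, uint64_t version,
                                 uint64_t ms, uint64_t seq) {
  std::string key = ns_key;
  auto put_be64 = [&key](uint64_t v) {
    for (int shift = 56; shift >= 0; shift -= 8)
      key.push_back(static_cast<char>((v >> shift) & 0xff));
  };
  put_be64(version);
  put_be64(ms);
  put_be64(seq);
  return key;
}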
Differences from Redis
- XTRIM and XADD with trim possibility: nearly exact trimming (via ~) is not possible due to implementation details (no radix tree here). However, the LIMIT option works, while in Redis it is allowed only in combination with ~. LIMIT can be disallowed to be consistent with the Redis protocol; I didn't do that because I want to hear opinions from kvrocks maintainers. (See the Redis command forms just below.)
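For reference, the Redis-side trimming forms under discussion are (standard Redis syntax; kvrocks behavior differs as described above):
XTRIM s1 MAXLEN 1000 (exact trimming, the same as MAXLEN = 1000)
XTRIM s1 MAXLEN ~ 1000 (nearly exact trimming in Redis)
XTRIM s1 MAXLEN ~ 1000 LIMIT 100 (in Redis, LIMIT is allowed only together with ~)
XADD s1 MAXLEN ~ 1000 * key1 val1 (trimming as part of XADD)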
Replication is not fully implemented yet. Basically, I didn't test streams in a cluster configuration. Perhaps a plain XREAD on a replica will work, but for a blocking XREAD that unblocks after an XADD on the master, I'm sure some code has to be written. It would be greatly appreciated if maintainers could provide me with some hints about how to implement this.
Consumer groups are not implemented. I'm thinking about a possible implementation.
Right now I'm looking for any maintainers' feedback from the adding-a-new-data-type perspective (maybe I didn't add the new column family to some filter/checker/extractor, etc.) and information about properly replicating a stream from the master to other nodes.
This closes #532
Can you add some design documents so that people can understand your design ideas :)
Thanks to @torwig for the great contribution.
For the replication part, Kvrocks replicates the data after writing into the DB, so we needn't do anything if there is no special case. For the new column family: I prefer keeping fewer column families, and feel free to add one if we MUST.
@caipengbo Yes, sure, I will :)
@git-hulk I mean that if I run
XREAD BLOCK 0 STREAMS s1 $
on a replica (it will block) and then execute
XADD s1 * k1 v1
on the master, the XREAD will be unblocked and return the added entry.
It's very similar to publish-subscribe - there is a fan-out.
Does it require some additional code in processing replicated data?
Yes, you are right. Kvrocks' default replication only sends the data to replicas; we need to recognize those special write batches and wake up the waiting clients.
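For illustration, one possible shape of that recognition step is sketched below. The class and wakeup hook are hypothetical; only rocksdb::WriteBatch::Handler, its PutCF callback, and WriteBatch::Iterate are real RocksDB API.

#include <rocksdb/write_batch.h>

// Hypothetical sketch (not the PR's actual code): while a replica applies a
// replicated write batch, detect Put operations in the stream column family
// and wake clients blocked on XREAD, similar to the pub/sub fan-out.
class StreamWakeupHandler : public rocksdb::WriteBatch::Handler {
 public:
  explicit StreamWakeupHandler(uint32_t stream_cf_id) : stream_cf_id_(stream_cf_id) {}

  rocksdb::Status PutCF(uint32_t column_family_id, const rocksdb::Slice &key,
                        const rocksdb::Slice &value) override {
    if (column_family_id == stream_cf_id_) {
      // A real implementation would decode the stream name and entry ID from
      // the key and notify the matching blocked readers here.
      WakeBlockedReaders(key, value);
    }
    return rocksdb::Status::OK();
  }

 private:
  void WakeBlockedReaders(const rocksdb::Slice & /*key*/, const rocksdb::Slice & /*value*/) {
    // Placeholder: look up clients blocked on this stream and signal them.
  }

  uint32_t stream_cf_id_;
};

// Usage sketch: batch.Iterate(&handler) walks the replicated batch and fires
// PutCF for every Put, so writes to the stream CF can be recognized.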
@caipengbo Added design notes.
Many thanks to @torwig for the detailed explanation and for bringing this good feature into the Kvrocks community; we will take a look soon.
I took a brief look and the overall design looks good to me. There are two small issues in the internal design:
- I think we should also save the number of entries in the encoded value instead of depending on '\0'; we should allow using '\0' in the string.
- We need to create the column family for streams in storage::open if we want to use the new column family, or it will fall back to the default column family.
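For illustration, opening RocksDB with an explicit stream column family looks roughly like this (the CF name "stream" and the wrapper function are assumptions, not the PR's exact code):

#include <rocksdb/db.h>

#include <string>
#include <vector>

// Open the DB listing the "stream" column family explicitly; a CF that is
// not created/listed cannot be handed out, which is how lookups end up
// falling back to the default column family.
rocksdb::Status OpenWithStreamCF(const std::string &path, rocksdb::DB **db,
                                 std::vector<rocksdb::ColumnFamilyHandle *> *handles) {
  rocksdb::Options options;
  options.create_if_missing = true;
  options.create_missing_column_families = true;

  std::vector<rocksdb::ColumnFamilyDescriptor> cf_descs = {
      {rocksdb::kDefaultColumnFamilyName, rocksdb::ColumnFamilyOptions()},
      {"stream", rocksdb::ColumnFamilyOptions()},  // hypothetical CF name
  };
  return rocksdb::DB::Open(rocksdb::DBOptions(options), path, cf_descs, handles, db);
}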
Thank you for your review. I'll fix the issues.
Just one question for now: what do you mean by depending on '\0'? Currently, the encoded value consists of:
first-string-len | first-string | second-string-len | second string
as implemented here: https://github.com/apache/incubator-kvrocks/pull/745/files#diff-eee0baf52c379afb7e25739db14ec62c603d3c8f4c7f46af24e019319b2bb4beR135
Yes, I mean we can also save the number of strings in the encoded value. In this example, the encoded value can be:
2(4 bytes) | first-string-len | first-string | second-string-len | second-string
so we can count the number of strings directly instead of depending on the end of the string. What do you think about this case?
@git-hulk Fixed possible crashes caused by accessing non-existing command arguments by index. Created the column family for streams. I will also use it for replication purposes, to unblock blocked readers.
I've got your idea about writing the number of strings into the encoded value. However, I don't understand how this information helps us. I mean, if we know that there are 2 strings encoded, how does this number work in the decoding process? Could you please provide an example?
Currently, we read the length len1, then read exactly len1 bytes as a string, then read len2, then read exactly len2 bytes as a string.
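For illustration, here is that decoding loop as a minimal self-contained C++ sketch (the function name is hypothetical; fixed-32 little-endian lengths are assumed, which may not match the PR exactly):

#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Decode key1-len | key1 | val1-len | val1 | ... back into a string list.
// Returns false if the buffer is truncated; embedded '\0' bytes are fine
// because only the length prefixes delimit the strings.
bool DecodeStreamEntryValue(const std::string &encoded, std::vector<std::string> *args) {
  size_t pos = 0;
  while (pos < encoded.size()) {
    if (encoded.size() - pos < sizeof(uint32_t)) return false;
    uint32_t len = 0;
    std::memcpy(&len, encoded.data() + pos, sizeof(len));  // little-endian assumption
    pos += sizeof(len);
    if (encoded.size() - pos < len) return false;
    args->emplace_back(encoded.data() + pos, len);
    pos += len;
  }
  return true;
}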
I rethought this point; the current implementation is OK since C++ strings don't depend on '\0' at all, so using the string length to determine where a value ends is fine. Sorry that I misunderstood the implementation. Just disregard it.
@git-hulk Added batch handling on replicas to unblock clients that were blocked by an XREAD command when the corresponding XADD arrives.
Thanks for your great contribution, I will try and review the PR in a few days.
@git-hulk I've removed the LIMIT option from the trim options to be compliant with the Redis protocol.
By default, Redis uses exact trimming (or if = is specified), and in this case the LIMIT option can't be used.
The LIMIT option can be used only with nearly exact trimming (if ~ is specified instead of =), but nearly exact trimming is bound to the Redis implementation of a stream (there are nodes with elements), so it can't be applied with RocksDB.
Also, I will resolve some conflicts with the unstable branch in a few minutes.
Thanks @torwig, it makes sense to me since the underlying structures are different. I think we can have basic stream functions first, then try to resolve the nearly exact trimming issue only if many community users request it, or just leave it behind.
I have reviewed this PR; the implementation, test coverage, and design look good to me. I will approve this PR after the above conversations are resolved.
PTAL @ShooterIT @Alfejik @caipengbo @PragmaTwice, I think we can merge this PR after it gets >= 3 approvals.
Sorry, some small things have come up recently. I will carry out the code review within this week :)
The redis_cmd.cc file was already long because it contains the command implementations.
Now we have added a new Stream type, which makes it much longer. Maybe we'll add other new types in the future; they will definitely have a lot of commands too, which will make the file huge.
Should we consider splitting this file, separating the various types of commands into different files? HDYT @git-hulk @PragmaTwice
LGTM
It sounds reasonable.
@caipengbo Yes, I agree that we should separate the commands by data structure after this PR. I'll submit a new issue to track this improvement.
@torwig Can you help to resolve the conflict?
@git-hulk Done
Thanks @torwig
We can merge this PR after one of @Alfejik @ShooterIT approves, since the new commit only resolved a command-number conflict, so I think the previous approvals are still valid.
Of course, guys can approve it again if you're free. cc @caipengbo @PragmaTwice
I don't have time to go through this patch. However, it seems this patch looks good to two reviewers. If we accept such a new feature, remember to track updates on our doc site :)
Thanks to @tison for the warm reminder. Will merge this PR if there is no further feedback by tomorrow, then update the doc site after merging.
Hello, thanks to @torwig for bringing this awesome feature to the Kvrocks community, and to everyone who reviewed this PR. I will summarize and merge this PR; feel free to create a new thread for further discussion. Cheers!!!
Thanks @torwig and @aleksraiden again.