Add support for the HyperLogLog data structure
storage format description: https://github.com/apache/kvrocks-website/pull/207/files.
@tutububug Thank you for your contribution. Running `./x.py format` should eliminate the linting issues.
Thank you for your contribution! Maybe you need to add a command parser so the data structure can be used via actual commands, as in Redis.
Thank you for your contribution!
Could you include your design in the PR description? For example, explain how to encode the metadata and HLL data (subkeys), similar to what is shown on https://kvrocks.apache.org/community/data-structure-on-rocksdb.
> Thank you for your contribution! Maybe you need to add a command parser so the data structure can be used via actual commands, as in Redis.
Yes, I will push a commit later.
> Thank you for your contribution!
> Could you include your design in the PR description? For example, explain how to encode the metadata and HLL data (subkeys), similar to what is shown on https://kvrocks.apache.org/community/data-structure-on-rocksdb.
OK.
@PragmaTwice I created a PR (https://github.com/apache/kvrocks-website/pull/207) to describe the hyperloglog storage format.
Thank you! Currently, it is only necessary to include your design in the PR description, not on the website as the design is not finalized yet.
Regarding your design, I have some questions:
- Based on the current design, typically one Redis key will introduce a maximum of 16384 RocksDB keys (registers). Each value corresponding to a RocksDB key contains only one integer. This may be inefficient; merging multiple registers onto one key could reduce the number of keys introduced (see the grouping sketch after this list). WDYT?
- I noticed that integers are stored as string representations (`std::to_string`) rather than their binary form (e.g., subkey and register value). What is the reason for this approach?
- Having a constant `size` seems illogical since the number of subkeys linked to this Redis key varies. (However, if every write operation modifies the metadata, it may lead to a decrease in performance. I don't have a clear idea about this aspect.)
Concerning the code, although I haven't reviewed it thoroughly yet, there are some points worth mentioning:
- The complete source code of MurmurHash can be placed in the `vendor` directory.
- It appears that using `PFADD` in the code leads to an increasing number of RocksDB keys (registers). There seems to be no operation that reduces these keys until deleting this Redis key. How can we prevent an increase in RocksDB keys without a mechanism to decrease them?
cc @git-hulk @mapleFU
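To illustrate the grouping idea from the first question above, here is a minimal hypothetical sketch; the grouping factor, type names, and field names are made up for illustration and are not part of this PR:

```cpp
#include <cstdint>

// Hypothetical sketch of "merge multiple registers onto one key": instead of
// one RocksDB subkey per register, a fixed number of registers share a subkey.
// kRegistersPerSubkey is an arbitrary illustrative value.
constexpr uint32_t kRegistersPerSubkey = 1024;  // 16384 / 1024 = at most 16 subkeys

struct RegisterLocation {
  uint32_t subkey_index;  // which RocksDB subkey holds this register
  uint32_t offset;        // position of the register inside that subkey's value
};

inline RegisterLocation Locate(uint32_t register_index) {
  return {register_index / kRegistersPerSubkey, register_index % kRegistersPerSubkey};
}
```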
@tutububug As @PragmaTwice mentioned in https://github.com/apache/kvrocks/pull/2142#issuecomment-1986756740, it's unnecessary to use a static number of 16384 subkeys, since that may heavily affect read performance when using PFMERGE. I guess a smaller number like 16 is enough, with every subkey holding 1000 registers.
I suggest storing the number of registers per RocksDB key in the metadata (for example, a field called register_number_in_one_key), so that even if the value is adjusted later for performance reasons, compatibility with existing kvrocks data can be maintained.
For now, a fixed value for register_number_in_one_key can be hardcoded in the code.
cc @tutububug
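To make the suggestion above concrete, here is a minimal hypothetical sketch of persisting such a field in the metadata; the field name `register_number_in_one_key`, the surrounding fields, and the big-endian fixed-width layout are all assumptions for illustration, not the actual kvrocks metadata code:

```cpp
#include <cstdint>
#include <string>

// Hypothetical metadata sketch: the only point is that the number of registers
// packed into one RocksDB key is written into the metadata, so the constant can
// be changed later without breaking already-written data.
struct HllMetadataSketch {
  uint32_t size = 1;                           // kept non-zero (see the later discussion about expiration)
  uint16_t register_number_in_one_key = 1024;  // registers per RocksDB subkey (illustrative default)

  void Encode(std::string *dst) const {
    AppendFixed32(dst, size);
    AppendFixed16(dst, register_number_in_one_key);
  }

 private:
  static void AppendFixed32(std::string *dst, uint32_t v) {
    for (int shift = 24; shift >= 0; shift -= 8) dst->push_back(static_cast<char>((v >> shift) & 0xFF));
  }
  static void AppendFixed16(std::string *dst, uint16_t v) {
    dst->push_back(static_cast<char>((v >> 8) & 0xFF));
    dst->push_back(static_cast<char>(v & 0xFF));
  }
};
```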
> Based on the current design, typically one Redis key will introduce a maximum of 16384 RocksDB keys (registers). Each value corresponding to a RocksDB key contains only one integer. This may be inefficient; merging multiple registers onto one key could reduce the number of keys introduced. WDYT?

There are two parts to this:
- The Bitmap type can be used for the dense logic, which would reduce the key count.
- A sparse HLL type can be introduced to save some overhead here.
So I think the question is: should we also introduce two modes of HLL encoding (sparse and dense layouts) and an automatic switching policy between them?
I'd prefer to do this :-) But we can regard it as a further optimization.
Everything else looks OK to me.
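To sketch what such a policy could look like (a rough illustration only; the enum, the threshold, and the per-entry size are assumptions, not part of this PR):

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical sketch of an automatic switch between a sparse and a dense HLL
// layout: stay sparse while the sparse form is smaller than the fixed-size
// dense form, then convert. All numbers and names are illustrative.
enum class HllEncoding : uint8_t { kSparse = 0, kDense = 1 };

constexpr size_t kRegisterCount = 16384;
constexpr size_t kDenseBytes = kRegisterCount * 6 / 8;  // 6-bit registers -> 12288 bytes
constexpr size_t kSparseEntryBytes = 3;                 // e.g. 2-byte index + 1-byte count

inline HllEncoding ChooseEncoding(size_t non_zero_registers) {
  return non_zero_registers * kSparseEntryBytes < kDenseBytes ? HllEncoding::kSparse
                                                              : HllEncoding::kDense;
}
```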
> Regarding your design, I have some questions:
> 1. Based on the current design, typically one Redis key will introduce a maximum of 16384 RocksDB keys (registers). Each value corresponding to a RocksDB key contains only one integer. This may be inefficient; merging multiple registers onto one key could reduce the number of keys introduced. WDYT?
Not really. A register (subkey) is only stored when its count is non-zero. This differs from the in-memory implementation, which uses a static array as a dense encoding. On disk, I think it is naturally a sparse encoding.
> 2. I noticed that integers are stored as string representations (`std::to_string`) rather than their binary form (e.g., subkey and register value). What is the reason for this approach?
The number of consecutive 0s is calculated from the last 50 bits of the hash value, so the maximum value is 50, and its string representation takes at most 2 bytes. It should not waste a lot of space, and at the same time it saves the overhead of integer encoding and decoding. For subkeys, binary might be more efficient, but the largest index (16383) is only 5 bytes as a string.
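For reference, a minimal sketch of the scheme described above, assuming a 64-bit hash whose low 14 bits select the register and whose remaining 50 bits are scanned for consecutive zeros (names and details are illustrative, not the PR's exact code):

```cpp
#include <cstdint>

// Hypothetical sketch: derive the register index from the low 14 bits of the
// hash (16384 registers) and count consecutive zero bits among the remaining
// 50 bits, so the stored count never exceeds 50.
inline void HashToRegister(uint64_t hash, uint32_t *index, uint32_t *count) {
  constexpr uint32_t kIndexBits = 14;
  *index = static_cast<uint32_t>(hash & ((1u << kIndexBits) - 1));

  uint64_t rest = hash >> kIndexBits;      // the remaining 50 bits
  uint32_t zeros = 0;
  while (zeros < 50 && (rest & 1) == 0) {  // stop at the first set bit or at 50
    ++zeros;
    rest >>= 1;
  }
  *count = zeros;
}
```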
> 3. Having a constant `size` seems illogical since the number of subkeys linked to this Redis key varies. (However, if every write operation modifies the metadata, it may lead to a decrease in performance. I don't have a clear idea about this aspect.)
Currently, the `size` field has no practical purpose; the only requirement is that it be non-zero. Due to the implementation of kvrocks' GetMetadata, non-string data structures with a size of 0 are treated as expired, while Redis's PFADD allows being called without any elements and still creates the key. For compatibility, `size` is kept as a constant to prevent expiration.
> Concerning the code, although I haven't reviewed it thoroughly yet, there are some points worth mentioning:
> 1. The complete source code of MurmurHash can be placed in the `vendor` directory.
OK.
> 2. It appears that using `PFADD` in the code leads to an increasing number of RocksDB keys (registers). There seems to be no operation that reduces these keys until deleting this Redis key. How can we prevent an increase in RocksDB keys without a mechanism to decrease them?
For an HLL user key, the maximum number of registers is 16384, and it cannot grow beyond that. In fact, I think this should be considered controllable compared to data structures such as hash, set, and list, whose sizes are determined by user input.
@PragmaTwice cc @git-hulk @mapleFU
> The number of consecutive 0s is calculated from the last 50 bits of the hash value, so the maximum value is 50, and its string representation takes at most 2 bytes. It should not waste a lot of space, and at the same time it saves the overhead of integer encoding and decoding. For subkeys, binary might be more efficient, but the largest index (16383) is only 5 bytes as a string.
For a RocksDB value, the payload would be only 1-2 bytes, and the per-value overhead (including storing the value size) comes on top of that, so it introduces an extremely large overhead. That's why I prefer a bitmap/bitfield-style implementation.
Doing 2^14 Gets in RocksDB would also be heavy, and might skew some RocksDB statistics, making it tend to compact more or cache unnecessary blocks.
> The number of consecutive 0s is calculated from the last 50 bits of the hash value, so the maximum value is 50, and its string representation takes at most 2 bytes. It should not waste a lot of space, and at the same time it saves the overhead of integer encoding and decoding. For subkeys, binary might be more efficient, but the largest index (16383) is only 5 bytes as a string.
Sorry, but I cannot get your point. In any case, using a string representation cannot be more efficient. In a binary representation, if the maximum value is less than 2^8 you should use one byte, and if it is less than 2^16 you should use two bytes.
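A tiny illustration of the point, assuming the register count always fits in one byte (function names are made up):

```cpp
#include <cstdint>
#include <string>

// Illustrative only: a count such as 37 serialized as text ("37") takes two
// characters and needs parsing on read, while a fixed one-byte binary encoding
// always takes exactly one byte and is read back directly.
inline std::string EncodeCountAsText(uint8_t count) { return std::to_string(count); }

inline std::string EncodeCountAsBinary(uint8_t count) { return std::string(1, static_cast<char>(count)); }

inline uint8_t DecodeBinaryCount(const std::string &value) { return static_cast<uint8_t>(value[0]); }
```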
> Not really. A register (subkey) is only stored when its count is non-zero. This differs from the in-memory implementation, which uses a static array as a dense encoding. On disk, I think it is naturally a sparse encoding.
I don't think you answered my question. The maximum number of subkeys is still 2^14 (16384). I think multiple registers should be put into one subkey to reduce the number of keys and improve the efficiency of counting.
@mapleFU @PragmaTwice Thank you for the careful code review. Since the code for the algorithm part comes from Redis, some function arguments and comments need to be adjusted. Regarding optimizing the key encoding and the number of subkeys, I will follow your suggestions. cc @git-hulk
@tutububug Thanks for your great efforts.
@tutububug FYI:
- bitfield: https://github.com/apache/kvrocks/blob/unstable/src/common/bitfield_util.h#L33 . You can refer to this logic or rewrite the HLL logic using it yourself. A whole HLL value using Redis's implementation would be about 13KB, so we can store it in one value (see the packing sketch after this comment). You can also refer to my bit utils: https://github.com/apache/kvrocks/blob/unstable/src/common/bit_util.h . It's not hard, but it may take some time to get familiar with. Redis uses similar logic, but doesn't encapsulate it as reusable tools.
- Coding style: the Redis code uses macros and some raw pointers; for maintainability, we'd better use C++-style code.
- Protocol and metadata: currently, this patch only implements the "dense" HLL. I'm OK with this, but maybe we can leave a "format" field in the metadata, for example:
| ... | format-version (1Byte) |
and set "dense == 0" here.
Thank you for your patience and your code.
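For illustration, here is a small sketch of packing 16384 six-bit registers into a single buffer (16384 * 6 / 8 = 12288 bytes), in the spirit of Redis's dense encoding; this is not the PR's code, and the class and helper names are made up:

```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch only: 16384 registers of 6 bits each packed into one
// contiguous buffer, similar in spirit to Redis's dense HLL representation.
class DenseRegistersSketch {
 public:
  static constexpr uint32_t kRegisters = 16384;
  static constexpr uint32_t kBitsPerRegister = 6;

  DenseRegistersSketch() : data_(kRegisters * kBitsPerRegister / 8, 0) {}

  // Read the 6-bit register at `index` (may span two adjacent bytes).
  uint8_t Get(uint32_t index) const {
    uint32_t bit = index * kBitsPerRegister;
    uint32_t byte = bit / 8, shift = bit % 8;
    uint16_t two = static_cast<uint8_t>(data_[byte]);
    if (byte + 1 < data_.size()) two |= static_cast<uint16_t>(static_cast<uint8_t>(data_[byte + 1])) << 8;
    return (two >> shift) & 0x3F;
  }

  // Write the 6-bit register at `index`, preserving neighboring registers.
  void Set(uint32_t index, uint8_t value) {
    uint32_t bit = index * kBitsPerRegister;
    uint32_t byte = bit / 8, shift = bit % 8;
    uint16_t mask = static_cast<uint16_t>(0x3F) << shift;
    uint16_t two = static_cast<uint8_t>(data_[byte]);
    if (byte + 1 < data_.size()) two |= static_cast<uint16_t>(static_cast<uint8_t>(data_[byte + 1])) << 8;
    two = (two & ~mask) | (static_cast<uint16_t>(value & 0x3F) << shift);
    data_[byte] = static_cast<char>(two & 0xFF);
    if (byte + 1 < data_.size()) data_[byte + 1] = static_cast<char>(two >> 8);
  }

 private:
  std::vector<char> data_;  // the single ~12KB value that could be stored as one RocksDB value
};
```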
> A whole HLL value using Redis's implementation would be about 13KB, so we can store it in one value.
Do you mean that we can store the whole HLL data in one RocksDB key?
IMHO it is worth exploring how many RocksDB key-value pairs are suitable here.
Kvrocks does not have MVCC. Considering that write operations are very frequent for HLL, I think splitting the data across multiple keys may improve efficiency.
cc @mapleFU
@PragmaTwice We can implement that first and optimize writing to HLL step by step, since the 2.9 release is still a long way off.
@mapleFU @PragmaTwice The code refactoring is ready for review, and the storage format description is written up at https://github.com/apache/kvrocks-website/pull/207/files. Please take a look if you have time. cc @git-hulk @torwig
Sure. @tutububug, can you help fix the lint error when you're free: https://github.com/apache/kvrocks/actions/runs/8538011240/job/23657605010?pr=2142#step:7:1368
Strange, there is no such warning in my branch (https://github.com/tutububug/incubator-kvrocks/pull/6). I triggered the action again to check.
@tutububug you can pull the unstable branch into this branch and have a try.
Fixed.
LGTM, with one comment: please check whether we need to add a notice for the source code.
@mapleFU Could you review it again?
@mapleFU Can you help take a look when you get time? We can merge this PR if it looks good, since it's been pending for too long.
This looks much better than the previous version. The remaining issues are mostly code-style problems.
fixed.
