feat(hash): Support hash field expiration
This PR aims to close #2269.
- Encoding

  To implement hash expiration at the field level, we encode the expiration time into the first 8 bytes of the value (see the sketch after this list). For example:

      +-------------------------+
      key|version|field  =>  | expire (8 byte) | value |
      +-------------------------+

  We use the second bit of the flags in the metadata to indicate whether the expiration is encoded into the value. If none of the fields in the hash has an encoded expiration time, the flags will be 1 0 0 0 | 0 0 1 0. If any field in the hash has an encoded expiration time, the flags will be 1 1 0 0 | 0 0 1 0.

- Flags conversion

  The HEXPIRE command sets an expiration on one or more fields. If we execute HEXPIRE on a hash key whose fields have no expiration yet, this sets the second bit of the flags and encodes an expiration for all fields (fields that do not appear in the command will be set to 0).

- Add commands

  Following https://redis.io/docs/latest/commands, add a series of related commands:
  HEXPIRE, HPEXPIRE, HEXPIREAT, HPEXPIREAT, HEXPIRETIME, HPEXPIRETIME, HTTL, HPTTL, HPERSIST
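To make the layout above concrete, here is a minimal, self-contained sketch of how a field value could be encoded and decoded under this scheme. The helper names echo ones discussed later in this thread, but the byte order and signatures are illustrative assumptions, not the PR's exact code.

```cpp
#include <cstdint>
#include <cstring>
#include <optional>
#include <string>
#include <utility>

// Illustrative only: prefix the payload with an 8-byte expire timestamp
// (milliseconds, 0 = "no expiration"), matching the | expire (8 byte) | value | layout.
std::string EncodeExpireToValueSketch(uint64_t expire_ms, const std::string &payload) {
  std::string out(sizeof(uint64_t), '\0');
  std::memcpy(out.data(), &expire_ms, sizeof(uint64_t));  // byte order is an assumption here
  return out + payload;
}

// Split a stored value back into (expire_ms, payload); nullopt for malformed values.
std::optional<std::pair<uint64_t, std::string>> DecodeExpireFromValueSketch(const std::string &raw) {
  if (raw.size() < sizeof(uint64_t)) return std::nullopt;
  uint64_t expire_ms = 0;
  std::memcpy(&expire_ms, raw.data(), sizeof(uint64_t));
  return std::make_pair(expire_ms, raw.substr(sizeof(uint64_t)));
}
```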
We use the second bit of the flags in the metadata to indicate whether the expiration is encoded into the value.
As we specified in https://kvrocks.apache.org/community/data-structure-on-rocksdb#encoding-version-and-data-type, the first and second bits (and maybe more bits from the MSB) of flags are used to represent the "encoding version". flags 1 1 0 0 | 0 0 1 0 indicates that the encoding version is 3.
So it should not be used for field expiration. (Also, not all data types need the field expiration feature.)
Instead, you can add a new enum value of data type, or add a new field in the metadata of hash.
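As an illustration of the second option (simplified, not the PR's actual definitions), the per-key flag could live in the hash metadata itself instead of the flags' encoding-version bits:

```cpp
#include <cstdint>

// Hypothetical sketch: record the subkey-value layout in the hash metadata,
// leaving the encoding-version bits of flags untouched.
enum class HashFieldEncodingSketch : uint8_t {
  kPlainValue = 0,           // old layout: value bytes only
  kExpirePrefixedValue = 1,  // new layout: 8-byte expire timestamp + value bytes
};

struct HashMetadataSketch {
  uint64_t size = 0;  // field count, as in the existing metadata
  HashFieldEncodingSketch field_encoding = HashFieldEncodingSketch::kPlainValue;
};
```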
@jjz921024 Thanks for your contributions. As @PragmaTwice mentioned, we cannot put the expired flag in the version field, and we also need to adapt the SubkeyFilter in compact_filter.cc to recycle expired subkeys.
@git-hulk Thanks for the reminder. I have added this in the latest commit.
About encoding, my previous thought was to use each bit for a different meaning, rather than as an incremental version.
For example:
- For all types, the first bit indicates whether 64-bit or 32-bit encoding is used
- For the hash type, the second bit of the flags indicates whether the expiration is encoded into the value (no effect on other types)

I see this issue #2292. Maybe we should finish it first?
For all types, the first bit indicates whether 64-bit or 32-bit encoding is used. For the hash type, the second bit of the flags indicates whether the expiration is encoded into the value (no effect on other types).
No. It's too ad hoc and will introduce additional maintenance effort. (The encoding of HASH would be inconsistent with other types, which brings a huge understanding and maintenance cost plus additional code logic.)
It also breaks our whole design of the "encoding version", e.g. HASH could not have an encoding version larger than 1.
You can consider these options, as I mentioned before.
Instead, you can add a new enum value of data type, or add a new field in the metadata of hash.
-1 for this change if no further commits.
@PragmaTwice Got it, I will change it.
@jjz921024 Please let me know if I'm wrong. If the data structure contains at least one key that can be expired, we set a special bit to true and later check it.
Yes. Because the expiration time is encoded in the field value, the time complexity of some commands changes from O(1) to O(n), such as TYPE and HLEN. If we can check this flag before execution, then the complexity of those commands remains the same when this feature is not used.
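A minimal sketch of that fast path, with made-up names rather than the real metadata layout: commands consult the per-key flag first and only fall back to scanning when some field may carry an expiration.

```cpp
#include <cstdint>
#include <functional>

// Illustrative only: O(1) answer from the metadata when no field carries an expiration,
// O(n) scan that skips expired fields otherwise.
struct HashMetadataSketch {
  uint64_t size = 0;                  // stored field count
  bool has_field_expiration = false;  // the special bit discussed above
};

uint64_t HLenSketch(const HashMetadataSketch &metadata,
                    const std::function<uint64_t()> &count_live_fields) {
  if (!metadata.has_field_expiration) return metadata.size;  // old O(1) behavior
  return count_live_fields();  // scan subkeys and count only non-expired fields
}
```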
@PragmaTwice Hi, I fixed the encoding version issue and added a new field in the metadata of hash to indicate whether the expiration time of a field is encoded. Please review, thanks.
@git-hulk Hi, I have renamed decodeFieldAndTTL to decodeExpireFromValue and encodeFieldAndTTL to encodeExpireToValue, and rewritten IsFieldExpired() to make it clearer. Please review.
Modifications:

- In the hlen command, count the number of fields when hash field expiration is enabled, replacing the previous method, so we avoid loading all of the fields into memory.
- For most generic commands (e.g. ttl, type, copy, expire, scan, ...): if the type is hash, we still need to check each field; if all of the fields are expired, GetMetadata should return not found (see the sketch after this list).
- In the del command, when deleting a hash object whose fields are all expired, the hash object should be treated as empty and should not affect deleted_cnt.
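A rough sketch of the second point, using simplified stand-in types rather than the actual kvrocks signatures: a hash whose fields are all expired is reported as not found, so generic commands treat the key as absent.

```cpp
// Illustrative only: simplified stand-in for the metadata lookup described above.
enum class StatusSketch { kOK, kNotFound };

StatusSketch GetHashMetadataSketch(bool key_exists, bool has_field_expiration,
                                   bool any_valid_field) {
  if (!key_exists) return StatusSketch::kNotFound;
  // All fields expired: behave as if the key does not exist (affects ttl, type, del, ...).
  if (has_field_expiration && !any_valid_field) return StatusSketch::kNotFound;
  return StatusSketch::kOK;
}
```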
As I mentioned before, we can add a config option (e.g. hash-field-expiration yes/no) for users to decide if they want to enable field expiration. Also, our testing can benefit from this option since we can cover the old hash encoding.
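For illustration, assuming the option is named hash-field-expiration as suggested (the real config wiring and error messages would differ), the gate in the field-TTL commands could look roughly like this:

```cpp
#include <string>

// Hypothetical sketch of gating HEXPIRE-style commands on a server config option,
// so hashes keep the old encoding when the option is off.
struct ConfigSketch {
  bool hash_field_expiration = false;  // "hash-field-expiration yes/no"
};

std::string HExpireGateSketch(const ConfigSketch &config) {
  if (!config.hash_field_expiration) {
    return "ERR field expiration is disabled, enable hash-field-expiration to use it";
  }
  return "OK";  // ...proceed to encode per-field expirations...
}
```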
@PragmaTwice Thanks for your tip, I have finished this feature. Please review.
@git-hulk @PragmaTwice @torwig Sorry to bother you. I have changed the code as the review required. May I ask you to review it again? I think this is a very useful feature and hope it can be merged into kvrocks. If there are any changes needed, please let me know. Thanks.
I have fixed the gofmt error and all CI passes in my fork repo.
https://github.com/jjz921024/kvrocks/actions/runs/10337719500
Seems we need to resolve the conflicts first?
Rebased this PR onto the latest commit of the unstable branch.
Seems there are failures in CI. I will check them.
Quality Gate passed
Issues: 40 new issues, 0 accepted issues
Measures: 0 security hotspots, 60.9% coverage on new code, 4.8% duplication on new code
@jjz921024 I will take another pass in a few days, thanks for your contribution.
@jjz921024 @git-hulk @PragmaTwice I'm looking forward to this feature, is there anything I can help with to help here?
Thank you for your input. I'll take a look this weekend.
rebase this branch on the HEAD
Awesome work, @jjz921024
Maybe we need some benchmarks, before including this feature and after, to test the impact of the new logic on the basic HSET command (and prevent or minimize it)?
@git-hulk @ltagliamonte Hi, I don't know how to commit code on the new PR #2717, so I just keep working here.
For the first point you mentioned in https://github.com/apache/kvrocks/pull/2717#issuecomment-2585068688, I have finished it. Please review.
As for the second point, do you mean that the ExistValidField() function should contain the following code? This code is repeated many times. I think ExistValidField() is a dedicated function to check whether the hash has any valid fields.
HashMetadata hash_metadata(false);  // false: decode an existing key, don't generate a new version
s = hash_metadata.Decode(rocksdb::Slice(pin_values[i].data(), pin_values[i].size()));
if (!s.ok()) continue;  // not a decodable hash metadata value, skip it
redis::Hash hash_db(storage_, namespace_);  // hash accessor used to inspect this key's fields
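For reference, a simplified sketch of what such a dedicated helper checks (not the PR's actual signature; the 8-byte expire prefix follows the layout from the PR description): return true as soon as one field is found that has not expired.

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Illustrative only: scan the hash's stored field values and report whether any
// field is still valid. 0 in the expire prefix means "no expiration".
bool ExistValidFieldSketch(const std::vector<std::string> &field_values, uint64_t now_ms) {
  for (const auto &raw : field_values) {
    if (raw.size() < sizeof(uint64_t)) continue;  // malformed value, ignore
    uint64_t expire_ms = 0;
    std::memcpy(&expire_ms, raw.data(), sizeof(uint64_t));
    if (expire_ms == 0 || expire_ms > now_ms) return true;  // found a live field
  }
  return false;
}
```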
@aleksraiden Hi, after the optimization in https://github.com/apache/kvrocks/pull/2402/commits/473db0c9ec5cf9f38379eda0d11f01e61ba5f90a, the expire fields should have no effect on the HSET command. Should we add some benchmark tests for the read commands of the hash type (like HGETALL)?
I ran a test with the standard redis-benchmark yesterday (before your commit), and at a quick look there is no impact on performance. But the standard test only covers GET/HSET, as far as I see.
Great job!
@jjz921024 Perhaps you can propose your change in the new PR, since this PR is overloaded with discussion. What do you think?
@aleksraiden Great!
I'd be pleased to do it, and I will try my best to finish this feature if I can.
@jjz921024 Thank you! We will take care of PR #2717 to keep the original commit information while merging.