rewrite-maven-plugin
chore(ci): bump org.rocksdb:rocksdbjni from 8.8.1 to 9.1.1
Bumps org.rocksdb:rocksdbjni from 8.8.1 to 9.1.1.
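For readers applying the bump by hand, it amounts to a one-line version change in the consuming project's `pom.xml`. The snippet below is a hypothetical excerpt; the dependency's actual declaration site and surrounding configuration in this repository are assumptions:

```xml
<!-- Hypothetical pom.xml excerpt: the actual declaration site in this repo may differ. -->
<dependency>
  <groupId>org.rocksdb</groupId>
  <artifactId>rocksdbjni</artifactId>
  <!-- was 8.8.1 before this PR -->
  <version>9.1.1</version>
</dependency>
```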
Release notes
Sourced from org.rocksdb:rocksdbjni's releases.
RocksDB 9.1.1
9.1.1 (2024-04-17)
Bug Fixes
- Fixed Java `SstFileMetaData` to prevent throwing `java.lang.NoSuchMethodError`
- Fixed a regression when `ColumnFamilyOptions::max_successive_merges > 0` where the CPU overhead for deciding whether to merge could have increased unless the user had set the option `ColumnFamilyOptions::strict_max_successive_merges`
RocksDB 9.1.0
9.1.0 (2024-03-22)
New Features
- Added an option, `GetMergeOperandsOptions::continue_cb`, to give users the ability to end `GetMergeOperands()`'s lookup process before all merge operands were found.
- Add sanity checks for ingesting external files that currently check if the user key comparator used to create the file is compatible with the column family's user key comparator.
- Support ingesting external files for column families that have user-defined timestamps in memtable only enabled.
- On file systems that support storage level data checksum and reconstruction, retry SST block reads for point lookups, scans, and flush and compaction if there's a checksum mismatch on the initial read.
- Some enhancements and fixes to experimental Temperature handling features, including a new `default_write_temperature` CF option and opening an `SstFileWriter` with a temperature.
- `WriteBatchWithIndex` now supports wide-column point lookups via the `GetEntityFromBatch` API. See the API comments for more details.
- Implement experimental features: API `Iterator::GetProperty("rocksdb.iterator.write-time")` to allow users to get data's approximate write unix time and write data with a specific write time via the `WriteBatch::TimedPut` API.
Public API Changes
- Best-effort recovery (`best_efforts_recovery == true`) may now be used together with atomic flush (`atomic_flush == true`). The all-or-nothing recovery guarantee for atomically flushed data will be upheld.
- Remove deprecated option `bottommost_temperature`, already replaced by `last_level_temperature`.
- Added new PerfContext counters for block cache bytes read: `block_cache_index_read_byte`, `block_cache_filter_read_byte`, `block_cache_compression_dict_read_byte`, and `block_cache_read_byte`.
- Deprecate experimental Remote Compaction APIs `StartV2()` and `WaitForCompleteV2()` and introduce `Schedule()` and `Wait()`. The new APIs essentially do the same thing as the old APIs: they allow taking an externally generated unique id to wait for remote compaction to complete.
- For API `WriteCommittedTransaction::GetForUpdate`, if the column family enables user-defined timestamps, it was mandated that the argument `do_validate` cannot be false, and UDT-based validation had to be done with a user-set read timestamp. It's updated to make the UDT-based validation optional if the user sets `do_validate` to false and does not set a read timestamp. With this, `GetForUpdate` skips UDT-based validation and it's the user's responsibility to enforce the UDT invariant. So DO NOT skip this UDT-based validation if you do not have a way to enforce the UDT invariant. Ways to enforce the invariant on the user side include managing a monotonically increasing timestamp, committing transactions in a single thread, etc.
- Defined a new PerfLevel `kEnableWait` to measure time spent by user threads blocked in RocksDB other than on a mutex, such as a write thread waiting to be added to a write group or a write thread delayed or stalled.
- `RateLimiter`'s API no longer requires the burst size to be the refill size. Users of `NewGenericRateLimiter()` can now provide the burst size in `single_burst_bytes`. Implementors of `RateLimiter::SetSingleBurstBytes()` need to adapt their implementations to match the changed API doc.
- Add `write_memtable_time` to the newly introduced PerfLevel `kEnableWait`.
Behavior Changes
- `RateLimiter`s created by `NewGenericRateLimiter()` no longer modify the refill period when `SetSingleBurstBytes()` is called.
- Merge writes will only keep the merge operand count within `ColumnFamilyOptions::max_successive_merges` when the key's merge operands are all found in memory, unless `strict_max_successive_merges` is explicitly set.
Bug Fixes
- Fixed `kBlockCacheTier` reads to return `Status::Incomplete` when I/O is needed to fetch a merge chain's base value from a blob file.
- Fixed `kBlockCacheTier` reads to return `Status::Incomplete` on table cache miss rather than incorrectly returning an empty value.
- Fixed a data race in WalManager that may affect how frequently `PurgeObsoleteWALFiles()` runs.
- Re-enable the `recycle_log_file_num` option in DBOptions for the `kPointInTimeRecovery` WAL recovery mode, which was previously disabled due to a bug in the recovery logic. This option is incompatible with `WriteOptions::disableWAL`. A `Status::InvalidArgument()` will be returned if `disableWAL` is specified.
Performance Improvements
- Java API `multiGet()` variants now take advantage of the underlying batched `multiGet()` performance improvements.

Before:

| Benchmark | columnFamilyTestType | keyCount | keySize | multiGetSize | valueSize | Mode | Cnt | Score | Error | Units |
|---|---|---|---|---|---|---|---|---|---|---|
| MultiGetBenchmarks.multiGetList10 | no_column_family | 10000 | 16 | 100 | 64 | thrpt | 25 | 6315.541 | ± 8.106 | ops/s |
| MultiGetBenchmarks.multiGetList10 | no_column_family | 10000 | 16 | 100 | 1024 | thrpt | 25 | 6975.468 | ± 68.964 | ops/s |

After:

| Benchmark | columnFamilyTestType | keyCount | keySize | multiGetSize | valueSize | Mode | Cnt | Score | Error | Units |
|---|---|---|---|---|---|---|---|---|---|---|
| MultiGetBenchmarks.multiGetList10 | no_column_family | 10000 | 16 | 100 | 64 | thrpt | 25 | 7046.739 | ± 13.299 | ops/s |
| MultiGetBenchmarks.multiGetList10 | no_column_family | 10000 | 16 | 100 | 1024 | thrpt | 25 | 7654.521 | ± 60.121 | ops/s |
... (truncated)
Changelog
Sourced from org.rocksdb:rocksdbjni's changelog.
... (truncated)
Commits
- `6f7cabe` update version.h and HISTORY.md for 9.1.1
- `adb9bf5` Fix `max_successive_merges` counting CPU overhead regression (#12546)
- `7dd5e91` 12474 history entry
- `e94141d` Fix exception on RocksDB.getColumnFamilyMetaData() (#12474)
- `bcf88d4` Skip io_uring feature test when building with fbcode (#12525)
- `f6d01f0` Don't swallow errors in BlockBasedTable::MultiGet (#12486)
- `e223cd4` Branch cut 9.1.fb
- `c449867` MultiCfIterator Impl Follow up (#12465)
- `b515a5d` Replace ScopedArenaIterator with ScopedArenaPtr&lt;InternalIterator&gt; (#12470)
- `3b736c4` Fix heap use after free error on retry after checksum mismatch (#12464)
- Additional commits viewable in compare view
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)