Compaction Error: 'File is too large for PlainTableReader!' cannot figure out correct configurations
Hi Team
When upgrading from RocksDB version 6.27.3 to 9.10.0, I receive the following warning and errors:

**Compaction error: Not implemented: File is too large for PlainTableReader!**

followed by many more compaction/flush "File is too large for PlainTableReader!" errors.
I have tried setting cf_options.level_compaction_dynamic_level_bytes = false; but still received the same PlainTable error.
- Can you help me figure out how to fix this so we can upgrade to the latest version? I have tried multiple tunings and configurations to make the SST files smaller, but none of them both kept reasonable write speeds and avoided the error; a sketch of what we have been trying is after this list.
- According to the documentation on Background Errors, flush/compaction background errors should be recovered automatically when the WAL is disabled. We are not seeing that behavior. Is there something that needs to be set to make sure this happens? (How we are observing recovery is sketched after this list.)
- Is there a way to get RocksDB to work out an SST file size during compaction/flush that stays under the PlainTable limit when the original batch size is already under it?
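
For the first question, here is a minimal sketch of the kind of size-cap tuning we have been trying, assuming the limit being hit is the roughly 2 GiB maximum file size PlainTableReader supports. The helper name and the numeric values are illustrative; only `kFileSizeLimit` itself comes from our config:

```cpp
#include <rocksdb/options.h>

// Illustrative cap, well under the ~2 GiB file size PlainTableReader supports.
// (Example value only -- our real kFileSizeLimit comes from our config.)
constexpr uint64_t kFileSizeLimit = 512ULL << 20;  // 512 MiB

// Hypothetical helper showing the shape of the tuning attempted.
void ApplySizeCaps(rocksdb::ColumnFamilyOptions& cf_options) {
  // Ask compaction to cut output files at roughly kFileSizeLimit on every level.
  cf_options.target_file_size_base = kFileSizeLimit;
  cf_options.target_file_size_multiplier = 1;
  // Bound how much data a single compaction can pull in.
  cf_options.max_compaction_bytes = 25 * kFileSizeLimit;
  // Keep flushes small so L0 files also start below the limit.
  cf_options.write_buffer_size = 256 << 20;
}
```

Even with caps along these lines, we either lose reasonable write speed or still end up with an oversized output file like the ~2.3 GB table in the LOG below.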
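For the background-error question, this is roughly how we are watching for automatic recovery, plus the manual fallback we assume is the alternative. The listener class name is ours; `OnBackgroundError`, `OnErrorRecoveryCompleted`, and `DB::Resume()` are the RocksDB hooks we understand are involved:

```cpp
#include <iostream>

#include <rocksdb/db.h>
#include <rocksdb/listener.h>

// Logs background errors and notes when automatic recovery completes.
class BgErrorLogger : public rocksdb::EventListener {
 public:
  void OnBackgroundError(rocksdb::BackgroundErrorReason /*reason*/,
                         rocksdb::Status* bg_error) override {
    std::cerr << "background error: " << bg_error->ToString() << "\n";
  }
  void OnErrorRecoveryCompleted(rocksdb::Status old_bg_error) override {
    std::cerr << "recovered from: " << old_bg_error.ToString() << "\n";
  }
};

// Registered via options.listeners.push_back(std::make_shared<BgErrorLogger>()).
// If OnErrorRecoveryCompleted never fires, our assumption is that the fallback
// is to call db->Resume() manually to clear the background error.
```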
Please let me know if there is more information I can provide.
Here is a snippet of the LOG file from the beginning of the section where this occurs:

```
[ 0, 1 ] 5392014 100.000% 100.000% ####################
2025/03/17-15:50:15.609961 2306 [db/compaction/compaction_job.cc:1672] [AColumnFamilyName] [JOB 1625] Generated table #1027: 5392014 keys, 2297027063 bytes, temperature: kUnknown
2025/03/17-15:50:15.610030 2306 EVENT_LOG_v1 {"time_micros": 1742226615609999, "cf_name": "AColumnFamilyName", "job": 1625, "event": "table_file_creation", "file_number": 1027, "file_size": 2297027063, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 541482978, "largest_seqno": 1239667735, "table_properties": {"data_size": 2242163868, "index_size": 41382261, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 0, "index_value_is_delta_encoded": 0, "filter_size": 13480128, "raw_key_size": 86272224, "raw_average_key_size": 16, "raw_value_size": 2145123680, "raw_average_value_size": 397, "num_data_blocks": 1, "num_entries": 5392014, "num_filter_entries": 0, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 8, "filter_policy": "", "column_family_name": "AColumnFamilyName", "column_family_id": 1, "comparator": "", "user_defined_timestamps_persisted": 1, "key_largest_seqno": 18446744073709551615, "merge_operator": "", "prefix_extractor_name": "rocksdb.FixedPrefix.8", "property_collectors": "", "compression": "", "compression_options": "", "creation_time": 0, "oldest_key_time": 0, "newest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "23f4a50e-a2b2-434c-af71-a6d4782dbeec", "db_session_id": "XLAUC8XY75T1ZMD8Z3NH", "orig_file_number": 1027, "seqno_to_time_mapping": "N/A"}}
2025/03/17-15:50:15.615068 2306 [WARN] [db/db_impl/db_impl_compaction_flush.cc:4022] Compaction error: Not implemented: File is too large for PlainTableReader!
2025/03/17-15:50:15.615077 2306 [WARN] [db/error_handler.cc:393] Background IO error Not implemented: File is too large for PlainTableReader!, reason 1
2025/03/17-15:50:15.615083 2306 [db/error_handler.cc:279] ErrorHandler: Set regular background error
2025/03/17-15:50:15.615141 2306 (Original Log Time 2025/03/17-15:50:15.615027) [db/compaction/compaction_job.cc:933] [AColumnFamilyName] compacted to: base level 3 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 4 16 52] max score 0.68, estimated pending compaction bytes 6139670158, MB/sec: 350.4 rd, 350.4 wr, level 5, files in(4, 1) filtered(0, 0) out(1 +0 blob) MB in(517.1, 1673.6 +0.0 blob) filtered(0.0, 0.0) out(2190.6 +0.0 blob), read-write-amplify(8.5) write-amplify(4.2) Not implemented: File is too large for PlainTableReader!, records in: 539201
2025/03/17-15:50:15.615148 2306 (Original Log Time 2025/03/17-15:50:15.615058) EVENT_LOG_v1 {"time_micros": 1742226615615039, "job": 1625, "event": "compaction_finished", "compaction_time_micros": 6555405, "compaction_time_cpu_micros": 6554983, "output_level": 5, "num_output_files": 1, "total_output_size": 2297027063, "num_input_records": 5392014, "num_output_records": 5392014, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 4, 16, 52]}
2025/03/17-15:50:15.615152 2306 [ERROR] [db/db_impl/db_impl_compaction_flush.cc:3450] Waiting after background compaction error: Not implemented: File is too large for PlainTableReader!, Accumulated background error counts: 1
2025/03/17-15:50:15.615197 2385 [db/db_impl/db_impl_compaction_flush.cc:1976] [BColumnFamilyName] Manual flush finished, status: Not implemented: File is too large for PlainTableReader!
```
Our DB Options:

```cpp
Options options;
options.wal_dir = kDBWalPath;
options.IncreaseParallelism(static_cast
```

ColumnFamilyOptions:

```cpp
cf_options.bloom_locality = 1U;
cf_options.target_file_size_base = kFileSizeLimit;
cf_options.memtable_whole_key_filtering = true;
cf_options.memtable_prefix_bloom_size_ratio = .25;
cf_options.memtable_factory.reset(NewHashSkipListRepFactory());
cf_options.level0_file_num_compaction_trigger = 20;
cf_options.target_file_size_multiplier = 1;
cf_options.write_buffer_size = 256 << 20;
cf_options.max_write_buffer_number = 4;
cf_options.level0_slowdown_writes_trigger = 20;
cf_options.level0_stop_writes_trigger = 20;
cf_options.comparator = get_comparator(type);
cf_options.prefix_extractor.reset(get_prefix_extractor(type));
cf_options.table_factory.reset(NewPlainTableFactory(get_plain_table_options(type)));
```
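
For completeness, get_plain_table_options(type) is our own per-column-family helper; for the column family shown in the LOG (fixed 8-byte keys, rocksdb.FixedPrefix.8 prefix extractor) it returns something shaped like this hypothetical sketch, not our exact values:

```cpp
#include <rocksdb/table.h>

// Hypothetical illustration of what get_plain_table_options(type) returns for
// the 8-byte-key column family; the exact values differ per type.
rocksdb::PlainTableOptions ExamplePlainTableOptions() {
  rocksdb::PlainTableOptions pt;
  pt.user_key_len = 8;        // matches fixed_key_len: 8 in the LOG (assumption)
  pt.bloom_bits_per_key = 10;
  pt.hash_table_ratio = 0.75;
  pt.index_sparseness = 16;
  pt.encoding_type = rocksdb::kPlain;
  return pt;
}
```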