Xu Han@AutoMQ

48 comments

- `s3.block.cache.size` is the maximum amount of data that BlockCache can cache.
- BlockCache only caches data that is unread or read ahead. It will drop the non-useful data...

@jerome-j-20230331 The cached DataBlock is evicted after (createTimestamp + 1 min). So the issue may be caused by the consumer reading too slowly to consume the data that is pre-read from...
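
A minimal sketch of the behavior described in the two comments above, with hypothetical names (`BlockCacheSketch`, `maxCacheSize` standing in for `s3.block.cache.size`); the real AutoMQ BlockCache is more involved, this only illustrates the size bound, the drop-after-read rule, and the one-minute expiry:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

// Hypothetical illustration only: a size-bounded cache whose entries expire
// one minute after creation, mirroring "evicted after (createTimestamp + 1min)".
class BlockCacheSketch {
    private static final long TTL_MS = TimeUnit.MINUTES.toMillis(1);

    record DataBlock(byte[] data, long createTimestamp) {}

    private final long maxCacheSize; // stands in for s3.block.cache.size
    private long cachedBytes = 0;
    private final Map<Long, DataBlock> blocks = new LinkedHashMap<>();

    BlockCacheSketch(long maxCacheSize) {
        this.maxCacheSize = maxCacheSize;
    }

    synchronized void put(long blockId, byte[] data) {
        evictExpired();
        // Drop the oldest entries until the new block fits under the size limit.
        var it = blocks.entrySet().iterator();
        while (cachedBytes + data.length > maxCacheSize && it.hasNext()) {
            cachedBytes -= it.next().getValue().data().length;
            it.remove();
        }
        blocks.put(blockId, new DataBlock(data, System.currentTimeMillis()));
        cachedBytes += data.length;
    }

    synchronized byte[] read(long blockId) {
        evictExpired();
        // Once a block has been read it is no longer useful, so remove it immediately.
        DataBlock block = blocks.remove(blockId);
        if (block == null) {
            return null; // a slow consumer misses blocks that already expired
        }
        cachedBytes -= block.data().length;
        return block.data();
    }

    private void evictExpired() {
        long now = System.currentTimeMillis();
        blocks.values().removeIf(b -> {
            if (now - b.createTimestamp() >= TTL_MS) {
                cachedBytes -= b.data().length;
                return true;
            }
            return false;
        });
    }
}
```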

> [@superhx](https://github.com/superhx) Thank you very much for your answer. In fact, we are indeed facing a big consumption lag problem. We have been trying to adjust the block cache size...

The bug is caused by PR https://github.com/AutoMQ/automq/pull/2356: the fast retry succeeds before the normal path, and the normal path then uploads the dirty data.
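
A hedged sketch of the race described above, using hypothetical upload methods (`fastRetryUpload`, `normalUpload`); it only illustrates how the losing path can still write dirty data if it is not cancelled once the fast retry wins, not the actual code changed in the PR:

```java
import java.util.concurrent.CompletableFuture;

public class UploadRaceSketch {
    // Hypothetical upload paths; in the real code these would write WAL data to S3.
    static CompletableFuture<String> normalUpload() {
        return CompletableFuture.supplyAsync(() -> {
            sleep(200); // slower path
            return "normal";
        });
    }

    static CompletableFuture<String> fastRetryUpload() {
        return CompletableFuture.supplyAsync(() -> {
            sleep(50); // wins the race
            return "fast-retry";
        });
    }

    public static void main(String[] args) {
        CompletableFuture<String> normal = normalUpload();
        CompletableFuture<String> fast = fastRetryUpload();

        // The caller acknowledges whichever path finishes first...
        CompletableFuture.anyOf(normal, fast)
                .thenAccept(winner -> System.out.println("acknowledged: " + winner));

        // ...but unless the losing path is cancelled (or its result discarded),
        // it still completes later and can upload data that is already stale.
        normal.thenAccept(r -> System.out.println("late completion: " + r));

        sleep(500); // let both paths finish for the demo
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```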

Is there any other error log in controller.log?

Could you get more logs about objects 6495919 and 154573100?

@jerome-j-20230331
1. Are there any logs related to object 6495919 before the first 'failed to mark destroy compacted' error occurs?
2. Could you try a rolling restart of all the controller nodes...

It seems the S3Object was deleted but the S3StreamObject wasn't. That should not happen; I will find the reason. You can delete the topic related to streamId=4711 to fix it.

It is caused by SSO compaction force split. I think we should limit the number of Streams for an Object in DeltaWALUploadTask to fewer than 10,000 to avoid exceeding the limits...
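
A rough sketch of the proposed limit, using a hypothetical `MAX_STREAMS_PER_OBJECT` constant and plain stream IDs; the actual change in DeltaWALUploadTask would operate on its own data structures:

```java
import java.util.ArrayList;
import java.util.List;

public class StreamGrouping {
    // Hypothetical cap; the comment above suggests keeping streams per object below 10,000.
    static final int MAX_STREAMS_PER_OBJECT = 10_000;

    // Split the streams destined for one upload into groups so that each
    // uploaded object references at most MAX_STREAMS_PER_OBJECT streams.
    static List<List<Long>> groupStreams(List<Long> streamIds) {
        List<List<Long>> groups = new ArrayList<>();
        for (int i = 0; i < streamIds.size(); i += MAX_STREAMS_PER_OBJECT) {
            groups.add(streamIds.subList(i, Math.min(i + MAX_STREAMS_PER_OBJECT, streamIds.size())));
        }
        return groups;
    }
}
```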