Alvin Lin
I don't think this is fixed, reopening.
There is a related [Prometheus issue](https://github.com/prometheus/prometheus/issues/5868) to support index files bigger than 64GiB. However, I think we shouldn't wait for that; Cortex should skip compaction for blocks...
After PR #4707 I still need to implement the part that automatically skips compaction for blocks with a humongous index.
Hmm, looks like I can't just reuse Thanos' `largeTotalIndexSizeFilter` in Cortex's `ShuffleShardingPlanner`, because it is coupled to the non-exported `tsdbBasedPlanner` struct.
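Since `largeTotalIndexSizeFilter` can't be reused directly, one option is a small standalone filter inside the planner. This is only a hedged sketch of the idea, not the actual Cortex implementation: `blockMeta` and `filterCompactable` are hypothetical stand-ins for the real metadata type and planner hook, and the 64GiB constant reflects the index size limit discussed above.

```go
package main

import "fmt"

// maxIndexSizeBytes is an assumed threshold; Prometheus TSDB index
// files are currently limited to 64 GiB.
const maxIndexSizeBytes = 64 * 1024 * 1024 * 1024

// blockMeta is a simplified stand-in for the real block metadata type.
type blockMeta struct {
	ID        string
	IndexSize int64
}

// filterCompactable drops blocks whose index is already too large to
// compact safely, returning the remaining candidates plus the IDs of
// the skipped blocks (so the planner can log or export a metric).
func filterCompactable(blocks []blockMeta, limit int64) (keep []blockMeta, skipped []string) {
	for _, b := range blocks {
		if b.IndexSize >= limit {
			skipped = append(skipped, b.ID)
			continue
		}
		keep = append(keep, b)
	}
	return keep, skipped
}

func main() {
	blocks := []blockMeta{
		{ID: "01A", IndexSize: 1 << 30},           // 1 GiB index, fine
		{ID: "01B", IndexSize: maxIndexSizeBytes}, // at the limit, skip
	}
	keep, skipped := filterCompactable(blocks, maxIndexSizeBytes)
	fmt.Println(len(keep), skipped) // prints: 1 [01B]
}
```

A real version would also need to account for the *projected* index size of the merged block, not just each input's, which is roughly what the Thanos filter does.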
Another relevant Thanos issue tracking the sharding work: https://github.com/thanos-io/thanos/issues/3068
@yeya24 you are right; closing this issue was a mistake. Thanks! I still need to learn to pay attention when merging PRs and check which issues may get incorrectly auto-closed...
@sandy2008 this looks interesting. Are you able to fix the workflow errors, update `CHANGELOG.md`, and update the documentation? I will take a look once the workflow passes.
Since we removed the chunks storage, I don't think this PR is valid anymore. The blob client used would come from Thanos, so I am closing this PR.