[SPARK-53446][CORE] Optimize BlockManager remove operations with cached block mappings
What changes were proposed in this pull request?
This continues #52210. It introduces three concurrent hash maps that track block ID associations, so BlockManager remove operations can use cached mappings instead of O(n) linear scans over all blocks.
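As a minimal sketch of the idea (the object, field, and method names below are hypothetical and need not match the PR's actual code): one concurrent map per block family is updated when a block is registered, so a remove operation can look up exactly the affected block IDs instead of scanning every block.

```scala
import java.util.concurrent.ConcurrentHashMap

import org.apache.spark.storage.{BlockId, BroadcastBlockId, RDDBlockId}

// Hypothetical sketch, not the PR's actual code.
object CachedBlockMappingSketch {
  // One index per block family; a third map for cache blocks would follow
  // the same pattern.
  val rddToBlocks = new ConcurrentHashMap[Int, java.util.Set[BlockId]]()
  val broadcastToBlocks = new ConcurrentHashMap[Long, java.util.Set[BlockId]]()

  private def newBlockSet(): java.util.Set[BlockId] = ConcurrentHashMap.newKeySet[BlockId]()

  // Record a newly registered block in the matching index.
  def trackBlock(blockId: BlockId): Unit = blockId match {
    case RDDBlockId(rddId, _) =>
      rddToBlocks.computeIfAbsent(rddId, _ => newBlockSet()).add(blockId)
    case BroadcastBlockId(broadcastId, _) =>
      broadcastToBlocks.computeIfAbsent(broadcastId, _ => newBlockSet()).add(blockId)
    case _ => // other block types keep using the existing code path
  }

  // Cost is proportional to this RDD's blocks, not to the total block count.
  def blocksForRdd(rddId: Int): java.util.Set[BlockId] =
    rddToBlocks.getOrDefault(rddId, java.util.Collections.emptySet[BlockId]())
}
```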
Why are the changes needed?
Previously, removeRdd(), removeBroadcast(), and removeCache() required scanning all blocks in blockInfoManager.entries to find matches. This approach becomes a serious bottleneck when:
- Large block counts: In production deployments with millions or even tens of millions of cached blocks, linear scans can be prohibitively slow
- High cleanup frequency: Workloads that repeatedly create and discard RDDs or broadcast variables accumulate overhead quickly
The original removeRdd() method already contained a TODO noting that an additional mapping would be needed to avoid linear scans. This PR implements that improvement.
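For context, a paraphrase of the pre-existing path (not a verbatim copy; blockInfoManager and removeBlock are the existing BlockManager members) shows why the scan scales with the total block count rather than with the blocks actually being removed:

```scala
// Paraphrased sketch of the previous removeRdd, which this PR replaces.
def removeRdd(rddId: Int): Int = {
  // TODO: Avoid a linear scan by creating another mapping of RDD.id to blocks.
  val blocksToRemove =
    blockInfoManager.entries.flatMap(_._1.asRDDId).filter(_.rddId == rddId).toSeq
  blocksToRemove.foreach { blockId => removeBlock(blockId, tellMaster = false) }
  blocksToRemove.size
}
```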
Does this PR introduce any user-facing change?
No.
How was this patch tested?
Existing tests in BlockManagerSuite ensure that block IDs are used and cleaned up normally.
Was this patch authored or co-authored using generative AI tooling?
No.
Different from #52210, the cached mappings are maintained in BlockInfoManager and are written and removed synchronously with blockInfoWrappers.
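Roughly, the intent can be sketched as follows (the hook names are illustrative only, and they are assumed to sit in BlockInfoManager next to the maps from the earlier sketch); the point is that the index is mutated in the same code paths that mutate blockInfoWrappers, so the two structures cannot drift apart:

```scala
// Hypothetical hooks, not the PR's actual method names.
private def onBlockRegistered(blockId: BlockId): Unit = {
  // called right after blockInfoWrappers gains an entry for blockId
  CachedBlockMappingSketch.trackBlock(blockId)
}

private def onBlockRemoved(blockId: BlockId): Unit = {
  // called right after blockInfoWrappers.remove(blockId)
  blockId.asRDDId.foreach { rddBlockId =>
    Option(CachedBlockMappingSketch.rddToBlocks.get(rddBlockId.rddId))
      .foreach(_.remove(blockId))
  }
}
```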
This looks reasonable to me. I do want to go through and make sure there are no other cases where we create a new block that might get lost, and I'd love some explicit tests here as well.
@holdenk Thank you. Could you please tell me how to design this unit test? Existing tests in BlockManagerSuite already ensure that block IDs are used and cleaned up normally.
Probably just check that the blocks are added.
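For example, a test along these lines (a sketch only) could be added to BlockManagerSuite; makeBlockManager is the suite's existing helper, and no direct accessor for the internal cached mapping is assumed, since removeRdd exercises it:

```scala
test("RDD blocks are tracked and removed via removeRdd") {
  val store = makeBlockManager(8000, "executor1")
  val blockId = RDDBlockId(rddId = 0, splitIndex = 0)
  store.putSingle(blockId, new Array[Byte](100), StorageLevel.MEMORY_ONLY)

  // The block should be visible after the put and gone after removeRdd,
  // which now resolves it through the cached mapping.
  assert(store.getStatus(blockId).isDefined)
  assert(store.removeRdd(0) === 1)
  assert(store.getStatus(blockId).isEmpty)
}
```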
@holdenk Can you help take a look again? Thanks.
thanks, merging to master!