karpenter-provider-aws
perf: clear instance type cache after ICE
Fixes #7443
Description
The instanceTypesCache in the InstanceType provider uses a complex cache key that includes the SeqNum of the unavailableOfferings cache. Since all entries of instanceTypesCache become invalid whenever unavailableOfferings is modified, the cache can be flushed in this case to reduce memory usage.
Changes to other parts of the instanceTypesCache key are not considered in this patch. It is likely that a similar issue could be triggered, for example, by dynamically updating the blockDeviceMappings of a nodeclass based on pod requirements. Since our setup most commonly modifies nodepool/nodeclass configurations in response to ICEs, this patch was sufficient to solve our memory usage issues.
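To illustrate the approach, here is a minimal, self-contained sketch (not the provider's actual code; the `unavailableOfferings` type, `MarkUnavailable`, and the `onChange` hook are illustrative stand-ins) of flushing the instance type cache as soon as the unavailable-offerings sequence number changes, instead of letting unreachable entries sit in memory until their TTL expires:

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"

	cache "github.com/patrickmn/go-cache"
)

// unavailableOfferings is a stand-in for the provider's ICE cache: every
// mutation bumps seqNum, which is part of the instanceTypesCache key, so all
// entries keyed on the old value become unreachable.
type unavailableOfferings struct {
	seqNum    atomic.Uint64
	offerings *cache.Cache
	onChange  func() // hypothetical hook fired after each mutation
}

func (u *unavailableOfferings) MarkUnavailable(offering string, ttl time.Duration) {
	u.offerings.Set(offering, struct{}{}, ttl)
	u.seqNum.Add(1)
	if u.onChange != nil {
		u.onChange()
	}
}

func main() {
	instanceTypes := cache.New(5*time.Minute, time.Minute)
	ice := &unavailableOfferings{offerings: cache.New(3*time.Minute, time.Minute)}

	// Flush the instance type cache whenever an ICE invalidates the old SeqNum.
	ice.onChange = func() { instanceTypes.Flush() }

	key := fmt.Sprintf("nodeclass-a/%d", ice.seqNum.Load())
	instanceTypes.SetDefault(key, "expensive per-nodeclass instance type list")

	ice.MarkUnavailable("m5.large/spot/us-east-1a", 3*time.Minute)
	fmt.Println("instance type cache entries after ICE:", instanceTypes.ItemCount()) // 0
}
```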
How was this change tested?
A patched version of the Karpenter controller was deployed in a development environment, and memory usage during cluster scale-up was tracked over a period of two weeks. Peak memory usage appears to be reduced by as much as 80% in situations where several ICEs occur in quick succession, completely eliminating the OOM issues seen previously.
Does this change impact docs?
- [ ] Yes, PR includes docs updates
- [ ] Yes, issue opened: #
- [x] No
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
Deploy Preview for karpenter-docs-prod canceled.
| Name | Link |
|---|---|
| Latest commit | eaa3f20152a837733857767abbb7a1eda69bb867 |
| Latest deploy log | https://app.netlify.com/sites/karpenter-docs-prod/deploys/67d83383e0af99000958cbb2 |
This PR has been inactive for 14 days. StaleBot will close this stale PR after 14 more days of inactivity.
Pull Request Test Coverage Report for Build 12281157741
Warning: This coverage report may be inaccurate.
This pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.
- For more information on this, see Tracking coverage changes with pull request builds.
- To avoid this issue with future PRs, see these Recommended CI Configurations.
- For a quick fix, rebase this PR at GitHub. Your next report should be accurate.
Details
- 16 of 16 (100.0%) changed or added relevant lines in 2 files are covered.
- No unchanged relevant lines lost coverage.
- Overall coverage increased (+0.04%) to 65.067%
| Totals | |
|---|---|
| Change from base Build 12265951884: | 0.04% |
| Covered Lines: | 5748 |
| Relevant Lines: | 8834 |
💛 - Coveralls
@jesseanttila-cai Can you help me understand why the automatic cache clean-up does not cover the case you are describing? Defined Cache TTL:
- https://github.com/aws/karpenter-provider-aws/blob/main/pkg/operator/operator.go#L172
- https://github.com/aws/karpenter-provider-aws/blob/main/pkg/cache/cache.go

By default, the instance type provider cache should clean up the old keys after 6 minutes. Can you share some metrics of the memory difference you saw, or a memory profile of the Karpenter controller?
@engedaam
> By default, the instance type provider cache should clean up the old keys after 6 minutes
The automatic cache clean-up does indeed limit the effect of the issue; in practice, however, the number of ICE occurrences within the TTL period can be large enough to have a very significant effect on memory usage. The graph shared in #7443 shows this behavior, including the effect of the cache TTL. The heap profile for the flamegraph in the comment is available here.
Snapshot successfully published to oci://021119463062.dkr.ecr.us-east-1.amazonaws.com/karpenter/snapshot/karpenter:0-aa24e89e4d83300d86b07f41d2421dc000faa849.
To install, you must log in to the ECR repo with an AWS account:
```bash
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 021119463062.dkr.ecr.us-east-1.amazonaws.com
```

```bash
helm upgrade --install karpenter oci://021119463062.dkr.ecr.us-east-1.amazonaws.com/karpenter/snapshot/karpenter --version "0-aa24e89e4d83300d86b07f41d2421dc000faa849" --namespace "kube-system" --create-namespace \
  --set "settings.clusterName=${CLUSTER_NAME}" \
  --set "settings.interruptionQueue=${CLUSTER_NAME}" \
  --set controller.resources.requests.cpu=1 \
  --set controller.resources.requests.memory=1Gi \
  --set controller.resources.limits.cpu=1 \
  --set controller.resources.limits.memory=1Gi \
  --wait
```
@engedaam I ended up rewriting this to properly define the desired behavior for multiple concurrent callers, synchronizing flushes and insertions to keep outdated entries out of the cache. There is one more change I could make to further clean up the implementation, described in https://github.com/aws/karpenter-provider-aws/pull/7517#discussion_r1928420429. Other than that, everything should be good to go; I don't believe there is a reliable way to automatically test for the kind of concurrency issues that these latest changes are supposed to prevent.
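For reference, here is a minimal sketch of what synchronizing flushes and insertions can look like (the `syncedCache` type and its methods are hypothetical, not the actual code in this PR): an insert is only accepted if no flush happened since the caller snapshotted the sequence number it computed its value against.

```go
package cachesync

import "sync"

// syncedCache is an illustrative cache in which a flush bumps a sequence
// number, and an insert is rejected if a flush occurred after the caller
// took its snapshot, keeping stale entries out of the cache.
type syncedCache struct {
	mu     sync.Mutex
	seqNum uint64
	items  map[string]any
}

func newSyncedCache() *syncedCache {
	return &syncedCache{items: map[string]any{}}
}

// Flush drops all entries and invalidates in-flight computations.
func (c *syncedCache) Flush() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.seqNum++
	c.items = map[string]any{}
}

// Snapshot returns the sequence number the caller is computing against.
func (c *syncedCache) Snapshot() uint64 {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.seqNum
}

// InsertIfCurrent stores the value only if no flush occurred since the
// snapshot was taken.
func (c *syncedCache) InsertIfCurrent(seq uint64, key string, value any) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if seq != c.seqNum {
		return false
	}
	c.items[key] = value
	return true
}
```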
@jesseanttila-cai I see we have a merge conflict here. It seems the change you are proposing was taken in as part of a separate CR that includes unrelated changes: https://github.com/aws/karpenter-provider-aws/commit/eff767c9754db6af33fce191e3d23fde65d7d11b#diff-247e625026e72481e1383a06fca4266d590c5950f058c29d6396837aef045827R171. This has already been released in Karpenter v1.3; let me know if you think it does not achieve the goal of the PR.
@engedaam I was able to reproduce the issue with the 1.3.2 controller (2.5x memory usage after 8 ICEs):
The effect is quite a bit weaker now, since it seems the issue no longer affects the instance type cache. The problem has moved to the newly added offering cache, which stores less data for each key.
Since the offering cache includes an Available field for each cached offering, it should now be possible to remove unavailableOfferings.SeqNum from the cache key, and instead simply update the cached offerings when their availability changes. This allows the cache to remain valid after ICEs, though there might be some data race safety issues with this that I'll have to look into.
I will report back once I've tested and rebased this PR with the new solution; hopefully the third iteration will finally be the charm.
@engedaam The new patch seems to work even better than expected; memory usage is surprisingly stable during cluster scale-up now:
I ended up doing a selective shallow clone of the offerings cache after each ICE, which should prevent any data race issues without requiring the entire cache to be reallocated every time.
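Roughly, a selective shallow clone can look like the following sketch (the `Offering` type and `cloneWithUnavailable` are illustrative, not the exact types in the provider): only the entries affected by the ICE are replaced with fresh copies marked unavailable, so readers holding the old slice never observe an in-place mutation, while unaffected entries keep sharing the same backing objects.

```go
package offerings

// Offering is an illustrative stand-in for a cached offering entry.
type Offering struct {
	InstanceType string
	Zone         string
	CapacityType string
	Available    bool
}

// cloneWithUnavailable returns a new slice in which offerings matching the ICE
// are replaced by copies marked unavailable; everything else is shared as-is.
// Readers of the old slice are unaffected, so no locking of readers is needed.
func cloneWithUnavailable(cached []*Offering, instanceType, zone, capacityType string) []*Offering {
	out := make([]*Offering, len(cached))
	for i, o := range cached {
		if o.InstanceType == instanceType && o.Zone == zone && o.CapacityType == capacityType {
			copied := *o
			copied.Available = false
			out[i] = &copied
		} else {
			out[i] = o // shallow share: no copy for unaffected offerings
		}
	}
	return out
}
```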
Closing, since this was superseded by https://github.com/aws/karpenter-provider-aws/pull/8304.