
Support memory qos with cgroups v2

Open xiaoxubeii opened this issue 3 years ago • 29 comments

Enhancement Description

  • One-line enhancement description (can be used as a release note): Support memory qos with cgroups v2
  • Kubernetes Enhancement Proposal: #2571
  • Discussion Link: https://docs.google.com/document/d/1r9Fx2omdMLP8SJ59np-C6uBqeuS4TSbQc5ChmQWMYJ8/edit?usp=sharing
  • Primary contact (assignee): @xiaoxubeii
  • Responsible SIGs: node
  • Enhancement target (which target equals to which milestone):
    • Alpha release target (x.y): v1.22
    • Beta release target (x.y):
    • Stable release target (x.y):
  • [x] Alpha
    • [x] KEP (k/enhancements) update PR(s): #2571
    • [x] Code (k/k) update PR(s): kubernetes/kubernetes#102578, kubernetes/kubernetes#102970
    • [x] Docs (k/website) update PR(s): https://github.com/kubernetes/website/pull/28566

xiaoxubeii avatar Mar 14 '21 11:03 xiaoxubeii

/sig node

xiaoxubeii avatar Mar 14 '21 11:03 xiaoxubeii

/assign @xiaoxubeii

xiaoxubeii avatar Mar 14 '21 11:03 xiaoxubeii

Hi! This sounds really interesting and I'd love to help out. Please let me know how I can contribute to this!

MadhavJivrajani avatar Mar 25 '21 17:03 MadhavJivrajani

/stage alpha
/milestone v1.22

ehashman avatar May 04 '21 18:05 ehashman

Hi @xiaoxubeii 👋 1.22 Enhancements Shadow here.

This enhancement is in good shape. Here are some minor change requests in light of Enhancement Freeze on Thursday, May 13th:

  • Update kep.yaml file to the latest template
  • In kep.yaml, status is currently provisional instead of implementable
  • Alpha graduation criteria missing
  • KEP not merged to master

Thanks!

gracenng avatar May 10 '21 17:05 gracenng

Hi @xiaoxubeii 👋 1.22 Enhancements shadow here.

To help SIGs be aware of their workload, I just wanted to check whether SIG-Node will need to do anything for this enhancement and, if so, whether they are OK with it. Thanks!

gracenng avatar May 11 '21 12:05 gracenng

@gracenng Hey Grace, I have updated the necessary content as follows:

  • [x] Update kep.yaml for PRR approval
  • [x] Add Alpha graduation criteria

The sig-node approvers @derekwaynecarr @mrunalp are reviewing it. I am waiting for lgtm/approve so the KEP can be merged as implementable.

xiaoxubeii avatar May 12 '21 02:05 xiaoxubeii

@gracenng The sig-node approvers (Derek and Mrunal) have given lgtm/approve. There are a few PRR review requests; I have updated the KEP and am waiting for the next review round. I think we can make it before the freeze day :)

xiaoxubeii avatar May 13 '21 01:05 xiaoxubeii

Hi @xiaoxubeii, looks like your PRR was approved and the requested changes are all here. I have updated the status of this enhancement to tracked. Thank you for keeping me updated!

gracenng avatar May 13 '21 02:05 gracenng

Thanks for your help. Also, many thanks for all the valuable review suggestions and help from @derekwaynecarr @mrunalp @bobbypage @giuseppe @odinuge @johnbelamaric @ehashman. Really appreciate it :)

xiaoxubeii avatar May 13 '21 03:05 xiaoxubeii

Hello @xiaoxubeii 👋 , 1.22 Docs Shadow here.

This enhancement is marked as Needs Docs for the 1.22 release. Please follow the steps detailed in the documentation to open a PR against the dev-1.22 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Fri July 9, 11:59 PM PDT. Also, take a look at Documenting for a release to familiarize yourself with the docs requirements for the release.

Thank you!

ritpanjw avatar May 19 '21 06:05 ritpanjw

@ritpanjw OK, thanks for the reminder.

xiaoxubeii avatar May 19 '21 06:05 xiaoxubeii

Hi @xiaoxubeii 🌞 1.22 enhancements shadow here.

In light of Code Freeze on July 8th, this enhancement's current status is tracked, and we're currently tracking kubernetes/kubernetes#102578 and kubernetes/kubernetes#102970.

Please let me know if there are any other code PRs associated with this enhancement.

Thanks

gracenng avatar Jun 23 '21 12:06 gracenng

Hi @xiaoxubeii 🌞 1.22 enhancements shadow here.

In light of Code Freeze on July 8th, this enhancement's current status is tracked, and we're currently tracking kubernetes/kubernetes#102578 and kubernetes/kubernetes#102970.

Please let me know if there are any other code PRs associated with this enhancement.

Thanks

@gracenng It is all here, thanks.

xiaoxubeii avatar Jun 24 '21 02:06 xiaoxubeii

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Nov 17 '21 22:11 k8s-triage-robot

This KEP has been released.

xiaoxubeii avatar Nov 18 '21 07:11 xiaoxubeii

This feature is alpha in v1.22. Any plan to make it beta?

BTW, I have some questions about the KEP.

  1. About the memoryThrottlingFactor: this is tricky. The kernel's ability is not fully exposed to the user API (I mean memoryThrottlingFactor is a node-level setting; the app owner cannot set it at the pod level). memory.high is like a soft limit and memory.max is a hard limit. Currently, there is only resource.limit, no resource.softlimit. See @derekwaynecarr's comment on it: https://github.com/kubernetes/enhancements/pull/2571#discussion_r631195702

Meanwhile, OOM killing is controlled by the kernel. If the kubelet handled a pod whose memory usage exceeds the limit, it could easily add an OOMKilled event, but for kernel killing the kubelet cannot get that directly. If memory.high == resource.limit, the kubelet could kill the pod instead of the kernel OOM killer.

Is there a performance issue if the throttle factor is too small? For example, some pods such as Java applications will always use ~85% of their memory, so memory.high will take effect continuously. The processes in the cgroup would be throttled and put under heavy reclaim pressure. Is this a risk of this feature?

  2. memory.low vs memory.min: the kernel's ability is not fully exposed to the user API here as well. memory.low is like a soft request and memory.request is a hard request. See @mrunalp's comment in https://github.com/kubernetes/enhancements/pull/2571#pullrequestreview-658390067. Currently, there is only resource.request, no resource.softrequest.

pacoxu avatar Oct 12 '22 09:10 pacoxu

This should not have been closed, as the feature is merely alpha. It either needs to continue graduating or be deprecated; there is more work to do in either case.

ehashman avatar Oct 13 '22 19:10 ehashman

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Nov 12 '22 20:11 k8s-triage-robot

/remove-lifecycle rotten

What are the barriers to moving this forward to, for example, beta and off by default?

sftim avatar Nov 13 '22 17:11 sftim

Should we freeze this issue?

(I'd assume that all alpha and beta features that ship in Kubernetes should have their KEP issues frozen, so that we continue to track the associated work)

sftim avatar Nov 13 '22 17:11 sftim

This feature is alpha in v1.22. Any plan to make it beta?

BTW, I have some questions about the KEP.

  1. About the memoryThrottlingFactor: this is tricky. The kernel's ability is not fully exposed to the user API (I mean memoryThrottlingFactor is a node-level setting; the app owner cannot set it at the pod level). memory.high is like a soft limit and memory.max is a hard limit. Currently, there is only resource.limit, no resource.softlimit. See @derekwaynecarr's comment on it: KEP-2570: Support memory qos with cgroups v2 #2571 (comment)

Yes. memory.high is more like a soft limit on memory, so in the alpha version we simply set it to (limits.memory or node allocatable memory) * memoryThrottlingFactor, where memoryThrottlingFactor is 0.8 by default at this moment. In other words, we effectively derive the resource.softlimit from (limits.memory or node allocatable memory).

Meanwhile, OOM killing is controlled by the kernel. If the kubelet handled a pod whose memory usage exceeds the limit, it could easily add an OOMKilled event, but for kernel killing the kubelet cannot get that directly. If memory.high == resource.limit, the kubelet could kill the pod instead of the kernel OOM killer.

The kubelet will never KILL THE POD in this case, but the kernel does. The kernel will kill the container whose memory usage is over limits.memory, which is set as memory.max in cgroup v2. There are no additional kubelet implementations in this KEP other than setting the correct values in cgroup v2.

Is there a performance issue if the throttle factor is too small? For example, some pods such as Java applications will always use ~85% of their memory, so memory.high will take effect continuously. The processes in the cgroup would be throttled and put under heavy reclaim pressure. Is this a risk of this feature?

Yes, maybe. The default throttle factor of 0.8 here is an experimental value. We will expose it as a kubelet startup parameter when appropriate, maybe at the beta stage.

  2. memory.low vs memory.min: the kernel's ability is not fully exposed to the user API here as well. memory.low is like a soft request and memory.request is a hard request. See @mrunalp's comment in KEP-2570: Support memory qos with cgroups v2 #2571 (review). Currently, there is only resource.request, no resource.softrequest.

Actually memory.low is not yet used in the KEP.
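
To summarize the answers above, here is a minimal, illustrative Go sketch of the alpha-stage value mapping (ignoring QoS-class special cases; the type and function names are hypothetical, not the actual kubelet code):

```go
package main

import "fmt"

// Default memoryThrottlingFactor in the alpha implementation, per the discussion above.
const defaultMemoryThrottlingFactor = 0.8

// memoryCgroupV2 holds the cgroup v2 memory interface values discussed in this thread.
type memoryCgroupV2 struct {
	Min  int64 // memory.min  <- requests.memory (protection; memory.low is not used)
	High int64 // memory.high <- limits.memory (or node allocatable) * memoryThrottlingFactor
	Max  int64 // memory.max  <- limits.memory (hard limit; the kernel OOM-kills above this)
}

// memoryQoSValues sketches the mapping only; the real kubelet applies it through
// its cgroup managers and the container runtime.
func memoryQoSValues(requestBytes, limitBytes int64, factor float64) memoryCgroupV2 {
	return memoryCgroupV2{
		Min:  requestBytes,
		High: int64(float64(limitBytes) * factor),
		Max:  limitBytes,
	}
}

func main() {
	// Example: requests.memory=500Mi, limits.memory=1Gi, default factor 0.8.
	v := memoryQoSValues(500<<20, 1<<30, defaultMemoryThrottlingFactor)
	fmt.Printf("memory.min=%d memory.high=%d memory.max=%d\n", v.Min, v.High, v.Max)
}
```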

xiaoxubeii avatar Nov 21 '22 09:11 xiaoxubeii

Just an idea on the factor.

  • code example (not tested yet): https://github.com/kubernetes/kubernetes/compare/master...pacoxu:cgroupv2-memory?expand=1

The current proposal is: memory.high = memory.limit * kubelet memoryThrottlingFactor

However, this is a problem when the requested memory is close to the memory limit, because the throttling threshold is then too easy to reach. And when users want to set a smaller factor, it may result in no throttling at all, because memory.high < memory.request is not accepted.

  • For instance, if the requested memory is 500Mi, the limit is 1Gi, and the factor is 0.6:
    • Then memory.high would be 600Mi, just 100Mi higher than the requested memory.
  • In another instance, if the requested memory is 800Mi and the limit is 1Gi, then for factor 0.6 no memory.high will be set in this case.
  • If the requested memory is not set, memory.high can be assigned to 600Mi.

A new proposal would be to make the factor based on the requested memory: memory.high = memory.request + (memory.limit - memory.request) * kubelet memoryThrottlingFactor. With the examples above:

  • For the first instance, memory.high would be 500Mi + 300Mi = 800Mi, so the factor still has a useful effect.
  • For the second instance (requested memory 800Mi, limit 1Gi, factor 0.6), memory.high will be 920Mi.
  • If the requested memory is not set, memory.high can still be set to 600Mi; no change for this case.

Is this a better factor design? I'm not sure whether there are other scenarios for memory throttling.

| Limit 1000Mi | current design: memory.high | my proposal: memory.high |
| --- | --- | --- |
| request 0Mi, factor 0.6 | 600Mi | 600Mi |
| request 500Mi, factor 0.6 | 600Mi | 800Mi |
| request 800Mi, factor 0.6 | max | 920Mi |
| request 1Gi, factor 0.6 | max | max |
| request 0Mi, factor 0.8 | 800Mi | 800Mi |
| request 500Mi, factor 0.8 | 800Mi | 900Mi |
| request 800Mi, factor 0.8 | max | 960Mi |
| request 500Mi, factor 0.4 | max | 700Mi |
| calculation method | memory.limit * kubelet memoryThrottlingFactor | memory.request + (memory.limit - memory.request) * kubelet memoryThrottlingFactor |
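
For reference, a small, self-contained Go sketch of the two formulas in the table (the helper names are mine, not from the kubelet), which reproduces the rows above:

```go
package main

import (
	"fmt"
	"math"
)

const Mi = int64(1) << 20

// Current design: memory.high = limit * factor.
// A result at or below the request is treated as "max" (no memory.high),
// since memory.high below the request is not accepted.
func memoryHighCurrent(request, limit int64, factor float64) int64 {
	high := int64(math.Round(float64(limit) * factor))
	if high <= request {
		return -1 // sentinel for "max"
	}
	return high
}

// Proposed design: memory.high = request + (limit - request) * factor.
func memoryHighProposed(request, limit int64, factor float64) int64 {
	return request + int64(math.Round(float64(limit-request)*factor))
}

func fmtHigh(v int64) string {
	if v < 0 {
		return "max"
	}
	return fmt.Sprintf("%dMi", v/Mi)
}

func main() {
	limit := 1000 * Mi
	cases := []struct {
		request int64
		factor  float64
	}{
		{0, 0.6}, {500 * Mi, 0.6}, {800 * Mi, 0.6},
		{0, 0.8}, {500 * Mi, 0.8}, {800 * Mi, 0.8},
		{500 * Mi, 0.4},
	}
	for _, c := range cases {
		fmt.Printf("request=%dMi factor=%.1f current=%s proposed=%s\n",
			c.request/Mi, c.factor,
			fmtHigh(memoryHighCurrent(c.request, limit, c.factor)),
			fmtHigh(memoryHighProposed(c.request, limit, c.factor)))
	}
}
```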

pacoxu avatar Dec 02 '22 08:12 pacoxu

The current proposal is: memory.high = memory.limit * kubelet memoryThrottlingFactor

How do we pick the magic number memoryThrottlingFactor? I don't think a single value will work well for all workloads.

I think it is better if these settings are exposed to the user.

IMHO, memory.high should be mapped to something like softrequest.

giuseppe avatar Dec 02 '22 15:12 giuseppe

I shared a bit more context about this feature in my and @mrunalp's KubeCon talk regarding cgroup v2.

One of the interesting pieces of feedback I received about the feature is that some folks may have applications which are quite latency/performance sensitive and always use very close to the memory limit. In those cases, customers would prefer not to have a soft memory limit set, so their application does not get impacted by kernel memory reclaim when hitting memory.high. So one thing to consider is whether we need some way to opt out on a per-pod basis, or some similar mechanism.

bobbypage avatar Dec 02 '22 20:12 bobbypage

One of the interesting pieces of feedback I received about the feature is that some folks may have applications which are quite latency/performance sensitive and always use very close to the memory limit. In those cases, customers would prefer not to have a soft memory limit set, so their application does not get impacted by kernel memory reclaim when hitting memory.high. So one thing to consider is whether we need some way to opt out on a per-pod basis, or some similar mechanism.

Yes; some Java applications especially may not need it, and it is necessary to have a per-pod option for this.

BTW, as you mentioned, memory.high may have side effects on some applications, so the default memoryThrottlingFactor of 0.8 may not be a good default value. I would like to either set the default to 0, which means no memory.high is set by default, or set it to a higher value like 0.9 or 0.99 to avoid performance issues.

pacoxu avatar Dec 05 '22 02:12 pacoxu

One of the interesting pieces of feedback I received about the feature is that some folks may have applications which are quite latency/performance sensitive and always use very close to the memory limit. In those cases, customers would prefer not to have a soft memory limit set, so their application does not get impacted by kernel memory reclaim when hitting memory.high. So one thing to consider is whether we need some way to opt out on a per-pod basis, or some similar mechanism.

Yes; some Java applications especially may not need it, and it is necessary to have a per-pod option for this.

BTW, as you mentioned, memory.high may have side effects on some applications, so the default memoryThrottlingFactor of 0.8 may not be a good default value. I would like to either set the default to 0, which means no memory.high is set by default, or set it to a higher value like 0.9 or 0.99 to avoid performance issues.

Like Bobby said, we need to find a way to opt out of it at the per-pod level, which means users could enable or disable it by setting something like a soft request in the pod.

xiaoxubeii avatar Dec 05 '22 07:12 xiaoxubeii

I tried to summarize things in https://docs.google.com/document/d/1p9awiXhu5f4mWsOqNpCX1W-bLLlICiABKU55XpeOgoA/edit?usp=sharing for discussions.

pacoxu avatar Dec 05 '22 08:12 pacoxu

I tried to summarize things in https://docs.google.com/document/d/1p9awiXhu5f4mWsOqNpCX1W-bLLlICiABKU55XpeOgoA/edit?usp=sharing for discussions.

We discussed it at the SIG Node meeting on Dec 6th: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#bookmark=id.ph8vc9vhlcxy with the summary "Collect feedback in the doc and then open a KEP update with alternatives". Discussion with timestamp: https://youtu.be/t3PcHj62f0c?t=686

SergeyKanzhelev avatar Jan 10 '23 00:01 SergeyKanzhelev

This is now discussed in https://docs.google.com/document/d/1r-e_jWL5EllBgxtaNb9qI2eLOKf9wSuuuglTySWmfmQ/edit#heading=h.6nyswlzgrqq0.

I opened a draft PR https://github.com/kubernetes/kubernetes/pull/115371 accordingly.

pacoxu avatar Jan 28 '23 09:01 pacoxu