
Add HPA

akuzni2 opened this issue 2 years ago · 8 comments

Is your feature request related to a problem? Please describe. We should allow users to set a flag that turns autoscaling on and creates a basic HPA for the controller.

Describe the solution you'd like

  1. A setting in the controller for HPA min/max replicas, the targets to scale on, and whether or not to turn it on.
  2. The ability to use custom metrics for the HPA, since the defaults are CPU/memory. (An enhancement to requirement 1; this would be a nice-to-have.)
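For context, the kind of object such a flag would generate might look like the manifest below. This is a minimal sketch: the resource names, namespace, and targets are illustrative, not what the controller would actually emit.

```yaml
# Hypothetical HPA that an autoscaling flag could generate for the
# ingress controller Deployment (names/targets are assumptions).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-ingress
  minReplicas: 2
  maxReplicas: 10
  metrics:
    # Default resource metric; requirement 2 would allow swapping in
    # custom metrics here (e.g. requests per second).
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```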

Describe alternatives you've considered Manually creating the HPA (I do this now), but there are many downsides to managing my own.


akuzni2 avatar Feb 28 '22 20:02 akuzni2

Hi @akuzni2 thanks for reporting!

Be sure to check out the docs while you wait for a human to take a look at this :slightly_smiling_face:

Cheers!

github-actions[bot] avatar Feb 28 '22 20:02 github-actions[bot]

can I assign this issue to myself and try to add a PR for it?

akuzni2 avatar Feb 28 '22 20:02 akuzni2

If you would like to begin the work to support HPA, I am all for it. cpu / memory should be good to begin with.

brianehlert avatar Feb 28 '22 20:02 brianehlert

@brianehlert currently I configured one in our prod environment. Before I implement it I'd like to iron out some of the issues I'm seeing.

One issue is that scaling up/down can cause 500 errors. Do you know why that might be? Will the ingress controller not drain connections to an NGINX instance before it's scaled down? For instance, if I have a deployment with 5 pods and choose to scale it down manually with kubectl scale deploy ... --replicas=4, I sometimes see 500 errors briefly. This doesn't happen on every scale, only occasionally.
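For what it's worth, a common mitigation for brief 500s during scale-in is to delay termination with a preStop sleep, so the pod's endpoint is removed from upstream lists before NGINX stops accepting connections. A sketch of the standard Kubernetes fields involved (whether this resolves this particular case is untested):

```yaml
# Sketch: give endpoint removal time to propagate before termination.
spec:
  template:
    spec:
      # Upper bound Kubernetes will wait before force-killing the pod.
      terminationGracePeriodSeconds: 60
      containers:
        - name: nginx-ingress
          lifecycle:
            preStop:
              exec:
                # Sleep before SIGTERM so in-flight requests drain while
                # the endpoint is being removed from Service/upstream lists.
                command: ["/bin/sh", "-c", "sleep 15"]
```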

I am on helm chart version 0.10.1, so that could possibly be the cause? That's using image nginx/nginx-ingress:1.12.1.

akuzni2 avatar Mar 01 '22 16:03 akuzni2

This project is on version 2.1. One thing I am not fully aware of is how long a drain duration K8s will tolerate before it kills the pods because the scale-in took too long.

brianehlert avatar Mar 01 '22 16:03 brianehlert

@brianehlert got it. I will see if I can update to latest and test again

akuzni2 avatar Mar 01 '22 16:03 akuzni2

Hi @akuzni2, we're interested in using an HPA with the controller. Any news on the 500 errors you're seeing?

unitygilles avatar Mar 28 '22 16:03 unitygilles

same here, this feature would be really useful for us, any updates?

philnichol avatar Jun 14 '22 16:06 philnichol

I believe this was addressed. https://github.com/nginxinc/kubernetes-ingress/pull/3276
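For anyone landing here later: assuming the linked PR followed the usual Helm autoscaling convention, enabling it would look roughly like the fragment below. The key names are a guess based on that convention, not confirmed against the released chart; check the chart's values.yaml for the actual flags.

```yaml
# Hypothetical values.yaml fragment; verify key names against the chart.
controller:
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 70
    targetMemoryUtilizationPercentage: 80
```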

brianehlert avatar Jun 22 '23 20:06 brianehlert