[chart] let replicaCount be configurable even if autoscaling is enabled
NGINX Ingress controller version (exec into the pod and run `nginx-ingress-controller --version`):
```console
bash-5.1$ /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.1.1
  Build:         a17181e43ec85534a6fea968d95d019c5a4bc8cf
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.9
-------------------------------------------------------------------------------
```
Kubernetes version (use `kubectl version`):
```console
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.6-gke.300", GitCommit:"df413ee6225aa3fc539e18ca3464a48d723bd3ea", GitTreeState:"clean", BuildDate:"2022-01-24T09:29:08Z", GoVersion:"go1.16.12b7", Compiler:"gc", Platform:"linux/amd64"}
```
Environment: cloud (dev)
- Cloud provider or hardware configuration: managed GKE cluster, with containerd running on Container-Optimized OS
- How was the ingress-nginx-controller installed:
```console
NAME                  NAMESPACE      REVISION  UPDATED                                STATUS    CHART                 APP VERSION
ingress-nginx-public  ingress-nginx  3         2022-05-06 11:29:55.464868 +0200 CEST  deployed  ingress-nginx-4.0.17  1.1.1
```
```console
$ helm -n ingress-nginx get values ingress-nginx-public
USER-SUPPLIED VALUES:
controller:
  admissionWebhooks:
    enabled: false
  autoscaling:
    enabled: true
    maxReplicas: 12
    minReplicas: 3
    targetCPUUtilizationPercentage: 60
    targetMemoryUtilizationPercentage: 600
  config:
    disable-access-log: "true"
    enable-metrics: "true"
    enable-opentracing: true
    forwarded-for-header: X-Real-IP
    jaeger-collector-host: tempo-distributor.tracing
    large-client-header-buffers: 8 8k
    proxy-buffering: "off"
    proxy-request-buffering: "off"
    use-forwarded-headers: "true"
    use-gzip: "false"
    worker-processes: "4"
  extraEnvs:
  - name: GOMAXPROCS
    value: "4"
  extraVolumeMounts:
  - mountPath: /etc/ingress-controller/ssl
    name: ssl
  - mountPath: /tmp
    name: tmp
  extraVolumes:
  - emptyDir: {}
    name: ssl
  - emptyDir: {}
    name: tmp
  ingressClassResource:
    default: true
    enabled: true
    name: nginx-public
  metrics:
    enabled: true
    serviceMonitor:
      additionalLabels:
        prometheus: ingress-nginx-prometheus
      enabled: true
  replicaCount: 3
  resources:
    limits:
      cpu: 1
      memory: 1Gi
    requests:
      cpu: 100m
      memory: 128Mi
  service:
    loadBalancerIP: <redacted>
    type: LoadBalancer
defaultBackend:
  enabled: true
rbac:
  create: true
```
What happened: The replica count was governed only by the autoscaler, which took some time (depending on metrics API latency) to scale up to 3 replicas. This could cause issues further down the line.
What you expected to happen: The initial replica count to be 3 instead of just 1; replicaCount and the HPA aren't mutually exclusive.
How to reproduce it: Install the chart with replicaCount: 3 and autoscaling enabled with minReplicas: 3 (see the sketch below). At first, only one controller pod is created; only after a slight delay does the HPA kick in and scale up to the minimum replica count.
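A minimal repro sketch, assuming the standard chart repo and an illustrative release name (the values mirror the report above):
```console
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm install ingress-nginx-public ingress-nginx/ingress-nginx \
    --namespace ingress-nginx --create-namespace \
    --set controller.replicaCount=3 \
    --set controller.autoscaling.enabled=true \
    --set controller.autoscaling.minReplicas=3 \
    --set controller.autoscaling.maxReplicas=12

# Immediately after install, the deployment asks for a single replica;
# it only reaches 3 once the HPA reconciles minReplicas.
$ kubectl -n ingress-nginx get deployment,hpa
```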
Other notes: I think that removing the condition here could help: https://github.com/kubernetes/ingress-nginx/blob/ec1b01092ef2c2ff36fe296c91c45d9b2d394bbd/charts/ingress-nginx/templates/controller-deployment.yaml#L22
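For context, the linked guard looks roughly like this (a sketch of the relevant template lines, not an exact copy of the chart source):
```yaml
# charts/ingress-nginx/templates/controller-deployment.yaml (sketch)
spec:
  # replicas is only rendered when autoscaling is disabled,
  # so an HPA-managed deployment always starts at the default of 1
  {{- if not .Values.controller.autoscaling.enabled }}
  replicas: {{ .Values.controller.replicaCount }}
  {{- end }}
```
Dropping the condition would make the deployment start at controller.replicaCount and let the HPA take over from there. One trade-off worth noting: if replicas is always rendered, every helm upgrade would reset the deployment to replicaCount, overriding whatever the HPA had scaled to, which is presumably why the guard exists.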
@fourstepper: This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-kind bug
/kind feature
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Not rotten or stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.