cluster-api-provider-aws
AWSManagedMachinePool - Unable to set maxUnavailablePercentage when updateConfig not already set
Found an issue where spec.updateConfig.maxUnavailablePercentage cannot be set on an AWSManagedMachinePool when there isn't a pre-existing spec.updateConfig.
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"infrastructure.cluster.x-k8s.io/v1beta2\",\"kind\":\"AWSManagedMachinePool\",\"metadata\":{\"annotations\":{},\"name\":\"capi-eks-quickstart-pool-us-east-1d\",\"namespace\":\"test-capi\"},\"spec\":{\"scaling\":{\"maxSize\":3,\"minSize\":1},\"updateConfig\":{\"maxUnavailablePercentage\":100}}}\n"}},"spec":{"updateConfig":{"maxUnavailablePercentage":100}}}
to:
Resource: "infrastructure.cluster.x-k8s.io/v1beta2, Resource=awsmanagedmachinepools", GroupVersionKind: "infrastructure.cluster.x-k8s.io/v1beta2, Kind=AWSManagedMachinePool"
Name: "capi-eks-quickstart-pool-us-east-1d", Namespace: "test-capi"
for: "capi-eks-quickstart.yaml": admission webhook "validation.awsmanagedmachinepool.infrastructure.cluster.x-k8s.io" denied the request: AWSManagedMachinePool.infrastructure.cluster.x-k8s.io "capi-eks-quickstart-pool-us-east-1d" is invalid: spec.updateConfig: Invalid value: v1beta2.UpdateConfig{MaxUnavailable:(*int)(0xc001a13b88), MaxUnavailablePercentage:(*int)(0xc001a13b90)}: cannot specify both maxUnavailable and maxUnavailablePercentage
Steps to reproduce:
- Create an EKS cluster with an AWSManagedMachinePool and no spec.updateConfig.
- Once the cluster is up, add spec.updateConfig.maxUnavailablePercentage = 100 (the value doesn't actually matter); see the manifest sketch after this list.
- Attempt to apply and you will receive the error above.
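A minimal sketch of the manifest that triggers the webhook rejection, reconstructed from the last-applied-configuration shown in the error output above (names and scaling values are taken from that output and are otherwise illustrative):

apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSManagedMachinePool
metadata:
  name: capi-eks-quickstart-pool-us-east-1d
  namespace: test-capi
spec:
  scaling:
    minSize: 1
    maxSize: 3
  # Added after the pool was originally created with no updateConfig at all;
  # applying this is what the validation webhook rejects.
  updateConfig:
    maxUnavailablePercentage: 100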
Workaround:
- Using maxUnavailable works. Once maxUnavailable has been applied, you can then re-apply with maxUnavailablePercentage successfully; a sketch of the two-step apply follows below.
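A sketch of the two-step workaround, showing only the spec.updateConfig fragment for the same pool as above (the maxUnavailable value of 1 is illustrative):

# Step 1: apply with the absolute count first; this passes validation.
spec:
  updateConfig:
    maxUnavailable: 1

# Step 2: re-apply, replacing maxUnavailable with the percentage form.
spec:
  updateConfig:
    maxUnavailablePercentage: 100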
/triage accepted
Hello @ryan-dyer-sp
Can you please share the template you used?
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.