Addons: disabling previously enabled plugin won't undeploy the addon
/kind bug
1. What kops version are you running? The command kops version will display this information.
Client version: 1.28.1 (git-v1.28.1)
2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.
Client Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.1-eks-2f008fe", GitCommit:"abfec7d7e55d56346a5259c9379dea9f56ba2926", GitTreeState:"clean", BuildDate:"2023-04-14T20:43:13Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.3", GitCommit:"a8a1abc25cad87333840cd7d54be2efaf31a3177", GitTreeState:"clean", BuildDate:"2023-10-18T11:33:18Z", GoVersion:"go1.20.10", Compiler:"gc", Platform:"linux/amd64"}
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
1. kops edit cluster
2. Change the addon configuration in the cluster spec (a non-interactive variant is sketched below):
   spec:
     awsLoadBalancerController:
       enabled: true --> false (or remove the entire block)
3. kops update cluster --yes
4. kops rolling-update cluster --yes
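For a scripted reproduction, the same change can be made without the interactive editor. This is only a sketch, assuming the cluster name from the manifest below (cl1.a.b.com) and that KOPS_STATE_STORE points at the state store:

# Export the current cluster spec, flip the addon off, and push it back
kops get cluster cl1.a.b.com -o yaml > cluster.yaml
# edit cluster.yaml: set spec.awsLoadBalancerController.enabled to false (or delete the block)
kops replace -f cluster.yaml
kops update cluster cl1.a.b.com --yes
kops rolling-update cluster cl1.a.b.com --yes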
5. What happened after the commands executed? The awsLoadBalancerController addon is still up and running.
6. What did you expect to happen? All awsLoadBalancerController resources would be deleted and the corresponding AWS IAM roles/policies removed.
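For reference, this is a sketch of how the leftover resources can be confirmed after the rolling update. The deployment name/namespace (kube-system/aws-load-balancer-controller) and the IAM role name substring are assumptions based on the upstream addon defaults, not taken from logs:

# Controller deployment still present (assumed name/namespace)
kubectl -n kube-system get deployment aws-load-balancer-controller
# IAM role created for the addon still present (assumed naming pattern)
aws iam list-roles --query "Roles[?contains(RoleName, 'aws-load-balancer-controller')].RoleName"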
7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2023-11-21T15:10:48Z"
  generation: 4
  name: cl1.a.b.com
spec:
  api:
    loadBalancer:
      class: Network
      type: Public
  authorization:
    rbac: {}
  awsLoadBalancerController:
    enabled: false
  certManager:
    enabled: true
    hostedZoneIDs:
    - AABBCCDD
  channel: stable
  cloudProvider: aws
  configBase: s3://a-b-com-state-store/cl1.a.b.com
  dnsZone: AABBCCDD
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: control-plane-eu-central-1a
      name: a
    manager:
      backupRetentionDays: 90
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: control-plane-eu-central-1a
      name: a
    manager:
      backupRetentionDays: 90
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
    useServiceAccountExternalPermissions: true
  kubeProxy:
    enabled: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 1.1.1.1/32
  - 2.2.2.2/32
  kubernetesVersion: 1.28.3
  masterPublicName: api.cl1.a.b.com
  metricsServer:
    enabled: true
  networkCIDR: 10.240.0.0/16
  networking:
    cilium:
      enableNodePort: true
      ipam: eni
  nodeTerminationHandler:
    cpuRequest: 200m
    enableRebalanceMonitoring: false
    enableSQSTerminationDraining: true
    enabled: true
    managedASGTag: aws-node-termination-handler/managed
    prometheusEnable: true
  nonMasqueradeCIDR: 100.64.0.0/10
  podIdentityWebhook:
    enabled: true
  serviceAccountIssuerDiscovery:
    discoveryStore: s3://a-b-com-oidc-store/cl1.a.b.com
    enableAWSOIDCProvider: true
  snapshotController:
    enabled: true
  sshAccess:
  - 1.1.1.1/32
  - 2.2.2.2/32
  sshKeyName: cl1.a.b.com
  subnets:
  - cidr: 10.240.64.0/18
    name: eu-central-1a
    type: Private
    zone: eu-central-1a
  - cidr: 10.240.128.0/18
    name: eu-central-1b
    type: Private
    zone: eu-central-1b
  - cidr: 10.240.192.0/18
    name: eu-central-1c
    type: Private
    zone: eu-central-1c
  - cidr: 10.240.0.0/21
    name: utility-eu-central-1a
    type: Utility
    zone: eu-central-1a
  - cidr: 10.240.8.0/21
    name: utility-eu-central-1b
    type: Utility
    zone: eu-central-1b
  - cidr: 10.240.16.0/21
    name: utility-eu-central-1c
    type: Utility
    zone: eu-central-1c
  topology:
    bastion:
      bastionPublicName: bastion.cl1.a.b.com
    dns:
      type: Public
8. Please run the commands with the most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here.
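No verbose logs were attached. If needed, the update step from the reproduction can be re-run with verbosity raised, e.g. (a sketch, assuming the same cluster name):

kops update cluster cl1.a.b.com --yes -v 10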
9. Anything else we need to know?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.