cluster-api-provider-aws
Deleting EKS Cluster with BYO VPC when AWSManagedControlPlane.secondaryCidrBlock is set tries to delete network
/kind bug
What steps did you take and what happened:
When you create an AWSManagedControlPlane using a BYO VPC and set secondaryCidrBlock to a value, deleting the cluster later causes CAPA to attempt to delete the VPC's network resources.
In my case, the VPC already had this secondary CIDR block associated with it; I assumed it still needed to be declared in the YAML manifest so that CAPA knew which block was the secondary one vs. the primary.
What did you expect to happen: The VPC resources are left alone since it is a BYO VPC (or the admission webhook blocks the manifest if this is an invalid configuration).
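The first expected behavior could be implemented as a guard in the delete path that skips secondary CIDR disassociation entirely for a user-supplied VPC. A minimal sketch of that guard follows; the type and function names are illustrative assumptions, not CAPA's actual code:

```go
package main

import "fmt"

// VPCSpec loosely mirrors the fields relevant to this bug; the names
// are illustrative, not CAPA's actual types.
type VPCSpec struct {
	ID                 string // pre-set in the manifest for a BYO VPC
	SecondaryCidrBlock string
	Managed            bool // false when the user brought their own VPC
}

// shouldDisassociateSecondaryCidr sketches the expected guard: only
// disassociate the secondary CIDR on delete when CAPA created the VPC
// itself. For a BYO VPC, the association is left alone.
func shouldDisassociateSecondaryCidr(vpc VPCSpec) bool {
	return vpc.Managed && vpc.SecondaryCidrBlock != ""
}

func main() {
	byo := VPCSpec{ID: "vpc-0123", SecondaryCidrBlock: "100.64.0.0/16", Managed: false}
	// BYO VPC: leave the CIDR association (and the rest of the network) alone
	fmt.Println(shouldDisassociateSecondaryCidr(byo))
}
```

With such a check in place, the delete reconciliation would never call the EC2 DisassociateVpcCidrBlock API for this cluster, and the InvalidCidrBlock.InUse error above would not occur.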
Anything else you would like to add: From the CAPA logs:
E0504 17:41:50.503466 1 logger.go:84] "error deleting network for AWSManagedControlPlane" err=<
InvalidCidrBlock.InUse: The vpc vpc-01fd<snip>420f currently has a subnet within CIDR block 100.64.0.0/16
status code: 400, request id: 766ebea6-8a43-4352-8a01-a4072d7ea6b7
> controller="awsmanagedcontrolplane" controllerGroup="controlplane.cluster.x-k8s.io" controllerKind="AWSManagedControlPlane" aWSManagedControlPlane="sbx-clusters/touge01" reconcileID=91aa659b-9834-401b-afe8-2375b74cdbd7 namespace="sbx-clusters" name="touge01"
I0504 17:41:50.503541 1 recorder.go:103] "events: Warning" object={Kind:AWSManagedControlPlane Namespace:sbx-clusters Name:touge01 UID:5e4a6a89-1c68-4a5f-bfce-4c5237b3e039 APIVersion:controlplane.cluster.x-k8s.io/v1beta2 ResourceVersion:215601986 FieldPath:} reason="FailedDisassociateSecondaryCidr" message=<
Failed disassociating secondary CIDR with VPC InvalidCidrBlock.InUse: The vpc vpc-01fd<snip>420f currently has a subnet within CIDR block 100.64.0.0/16
status code: 400, request id: 766ebea6-8a43-4352-8a01-a4072d7ea6b7
>
I0504 17:41:50.503566 1 recorder.go:103] "events: Warning" object={Kind:AWSManagedControlPlane Namespace:sbx-clusters Name:touge01 UID:5e4a6a89-1c68-4a5f-bfce-4c5237b3e039 APIVersion:controlplane.cluster.x-k8s.io/v1beta2 ResourceVersion:215601986 FieldPath:} reason="FailedDisassociateSecondaryCidr" message=<
Failed disassociating secondary CIDR with VPC InvalidCidrBlock.InUse: The vpc vpc-01fd<snip>420f currently has a subnet within CIDR block 100.64.0.0/16
status code: 400, request id: 766ebea6-8a43-4352-8a01-a4072d7ea6b7
>
Example YAML:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: touge01
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 100.64.0.0/16
    services:
      cidrBlocks:
        - 172.20.0.0/16
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta2
    kind: AWSManagedControlPlane
    name: touge01
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSManagedCluster
    name: touge01
---
kind: AWSManagedCluster
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
metadata:
  name: touge01
spec: {}
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: AWSManagedControlPlane
metadata:
  name: touge01
spec:
  network:
    vpc:
      id: vpc-01fd<snip>420f
    subnets:
      - id: subnet-0f147<snip>1e95 # private-az1
      - id: subnet-0ff4<snip>451e # private-az2
      - id: subnet-0b7a<snip>b4d8 # private-az3
      - id: subnet-01f5<snip>f82d # public-az1
      - id: subnet-0577<snip>4d16 # public-az2
      - id: subnet-0c1b<snip>3ac1 # public-az3
      - id: subnet-0d5b<snip>a86 # pod-az1
      - id: subnet-0ebc<snip>cc2b # pod-az2
      - id: subnet-00d0<snip>99ec # pod-az3
  secondaryCidrBlock: 100.64.0.0/16
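The second expected behavior mentioned above, an admission webhook rejecting the combination, could be sketched as follows. The function signature and error message are hypothetical, not CAPA's actual webhook:

```go
package main

import (
	"errors"
	"fmt"
)

// validateSecondaryCidr sketches the admission-webhook alternative:
// reject a manifest that sets secondaryCidrBlock on an unmanaged
// (BYO) VPC, assuming an explicit VPC ID marks the VPC as BYO.
func validateSecondaryCidr(vpcID, secondaryCidrBlock string) error {
	byoVPC := vpcID != "" // an explicit VPC ID means CAPA did not create it
	if byoVPC && secondaryCidrBlock != "" {
		return errors.New("secondaryCidrBlock cannot be set with a BYO VPC; associate the CIDR on the VPC directly instead")
	}
	return nil
}

func main() {
	// The manifest in this issue sets both fields, so it would be rejected.
	err := validateSecondaryCidr("vpc-0123", "100.64.0.0/16")
	fmt.Println(err != nil)
}
```

Either fix would have surfaced the misconfiguration at apply time instead of failing the delete reconciliation loop.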
Environment:
- Cluster-api-provider-aws version: 2.0.2
- Kubernetes version (use kubectl version): 1.25.9
- OS (e.g. from /etc/os-release): macOS 13.3.1
/triage accepted
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.