Adding VPC Prefix list to kubernetesApiAccess block returns malformed CIDR block error
/kind bug
1. What kops version are you running? The command kops version will display this information.
kops version
1.23.2
2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.
1.19.16
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
Ran terraform apply to update the cluster template config after adding a VPC prefix list to the kubernetesApiAccess and sshAccess blocks.
5. What happened after the commands executed?
error running task "SecurityGroupRule/icmp-pmtu-api-elb-pl-xxxxx" (9m43s remaining to succeed): error creating SecurityGroupIngress: │ InvalidParameterValue: CIDR block pl-xxx is malformed │ status code: 400, request id: 5c23a321-8acb-41c3-b25f-8d00d8652ea5
6. What did you expect to happen?
Expected the prefix list to be accepted in the kubernetesApiAccess and sshAccess specs and applied as a prefix-list source on the generated security group rules.
7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: null
  generation: 1
  name: xxxxxxxxxxxxxxxxx
spec:
  additionalPolicies:
    master: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "s3:GetBucketLocation",
            "s3:ListBucket"
          ],
          "Resource": [
            "arn:aws:s3:::xxxxxxxxxxxxxxxxx"
          ]
        },
        {
          "Effect": "Allow",
          "Action": [
            "s3:Get*"
          ],
          "Resource": [
            "arn:aws:s3:::xxxxxxxxxxxxxxxxx/*"
          ]
        }
      ]
    node: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "sts:AssumeRole"
          ],
          "Resource": [
            "*"
          ]
        }
      ]
  addons:
  - manifest: s3://xxxxxxxxxxxxxxxxx/vault-user-channel/vault-user-stable.yaml
  - manifest: s3://xxxxxxxxxxxxxxxxx/kubelet-api-channel/kubelet-api-stable.yaml
  - manifest: s3://xxxxxxxxxxxxxxxxx/tutor-admin-binding-channel/tutor-admin-binding-stable.yaml
  - manifest: s3://xxxxxxxxxxxxxxxxx/cert-manager-crds-channel/cert-manager-crds-stable.yaml
  - manifest: s3://xxxxxxxxxxxxxxxxx/flux-crds-channel/flux-crds-stable.yaml
  - manifest: s3://xxxxxxxxxxxxxxxxx/storageclass-channel/storageclass-stable.yaml
  api:
    loadBalancer:
      class: Classic
      type: Internal
  authorization:
    rbac: {}
  channel: stable
  cloudLabels:
    bc: tutor
    creator: xxxxxxxxxxxxxxxxx
    env: staging
    k8s.io/cluster-autoscaler/xxxxxxxxxxxxxxxxx: owned
    org: engineering
    source: terraform
    team: tutor
  cloudProvider: aws
  configBase: s3://xxxxxxxxxxxxxxxxx/xxxxxxxxxxxxxxxxx
  dnsZone: xxxxxx
  docker:
    registryMirrors:
    - https://mirror.gcr.io
  etcdClusters:
  - cpuRequest: "1"
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: isd-master-us-west-2a
      name: a
    - encryptedVolume: true
      instanceGroup: isd-master-us-west-2b
      name: b
    - encryptedVolume: true
      instanceGroup: isd-master-us-west-2c
      name: c
    memoryRequest: 1Gi
    name: main
  - cpuRequest: 200m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: isd-master-us-west-2a
      name: a
    - encryptedVolume: true
      instanceGroup: isd-master-us-west-2b
      name: b
    - encryptedVolume: true
      instanceGroup: isd-master-us-west-2c
      name: c
    memoryRequest: 512Mi
    name: events
  - cpuRequest: 200m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: isd-master-us-west-2a
      name: a
    - encryptedVolume: true
      instanceGroup: isd-master-us-west-2b
      name: b
    - encryptedVolume: true
      instanceGroup: isd-master-us-west-2c
      name: c
    manager:
      env:
      - name: ETCD_AUTO_COMPACTION_MODE
        value: revision
      - name: ETCD_AUTO_COMPACTION_RETENTION
        value: "1000"
      - name: ETCD_LISTEN_METRICS_URLS
        value: http://0.0.0.0:8081
      - name: ETCD_METRICS
        value: basic
    memoryRequest: 256Mi
    name: cilium
  externalPolicies:
    master:
    - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
    node:
    - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeAPIServer:
    admissionControl:
    - AlwaysPullImages
    cpuRequest: "1"
    oidcClientID: kuberos
    oidcGroupsClaim: groups
    oidcIssuerURL: https://xxxxxxxxxxxxxxxxx/auth
  kubeDNS:
    provider: CoreDNS
  kubeProxy:
    enabled: false
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
    imageGCHighThresholdPercent: 75
    imageGCLowThresholdPercent: 50
    kubeReserved:
      cpu: 100m
      memory: 256Mi
    systemReserved:
      cpu: 100m
      memory: 256Mi
  kubernetesApiAccess:
  - xxx.xxx.xxx/xx
  - pl-xxxxxxx
  kubernetesVersion: 1.19.16
  masterPublicName: api.xxxxxxxxxxxxxxxxx
  networkCIDR: XXX.XXX.XXX/XX
  networkID: vpc-xxx
  networking:
    cilium:
      enableNodePort: true
      enablePrometheusMetrics: true
      enableRemoteNodeIdentity: false
      etcdManaged: true
  nonMasqueradeCIDR: xxx.xxx.xxx/xx
  rollingUpdate:
    maxSurge: 33%
  sshAccess:
  - XXX.XXX.XXX/XX
  - pl-xxxxxxx
  sshKeyName: nonprod-2022
  subnets:
  - cidr: xxx.xxx.xxx/xx
    id: subnet-xxxxxxxxxx
    name: internal-us-west-2a
    type: Private
    zone: us-west-2a
  - cidr: xxx.xxx.xxx/xx
    id: subnet-xxxxxxxxxx
    name: internal-us-west-2b
    type: Private
    zone: us-west-2b
  - cidr: xxx.xxx.xxx/xx
    id: subnet-xxxxxxxxxx
    name: internal-us-west-2c
    type: Private
    zone: us-west-2c
  - cidr: xxx.xxx.xxx/xx
    id: subnet-xxxxxxxxxx
    name: external-us-west-2a
    type: Utility
    zone: us-west-2a
  - cidr: xxx.xxx.xxx/xx
    id: subnet-xxxxxxxxxx
    name: external-us-west-2b
    type: Utility
    zone: us-west-2b
  - cidr: xxx.xxx.xxx/xx
    id: subnet-xxxxxxxxxx
    name: external-us-west-2c
    type: Utility
    zone: us-west-2c
  topology:
    dns:
      type: Private
    masters: private
    nodes: private
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2022-06-21T14:28:56Z"
  labels:
    kops.k8s.io/cluster: xxxxxxxxxxxxxxxxx
    roles: ""
  name: isd-bc
spec:
  additionalSecurityGroups:
  - sg-xxxxx
  - sg-xxxxx
  cloudLabels:
    k8s.io/cluster-autoscaler/enabled: "1"
    roles: xxxxxxxxxxxxxxxxx-bc-node
  image: ubuntu/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220509
  machineType: t3.xlarge
  maxSize: 18
  minSize: 6
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
    roles/1: xxx-kubernetes-node
    roles/2: xxx-bc-node
  role: Node
  rootVolumeSize: 100
  subnets:
  - internal-us-west-2a
  - internal-us-west-2b
  - internal-us-west-2c
  suspendProcesses:
  - AZRebalance
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2022-06-21T14:28:53Z"
  labels:
    kops.k8s.io/cluster: xxxxxxxxxxxxxxxxx
  name: isd-master-us-west-2a
spec:
  additionalSecurityGroups:
  - sg-xxxxx
  cloudLabels:
    roles: xxxxxxxxxxxxxxxxx-master
  image: ubuntu/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220509
  machineType: c5.xlarge
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-west-2a
    roles/1: xxx-kubernetes-master
  role: Master
  rootVolumeSize: 100
  subnets:
  - internal-us-west-2a
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2022-06-21T14:28:54Z"
  labels:
    kops.k8s.io/cluster: xxxxxxxxxxxxxxxxx
  name: isd-master-us-west-2b
spec:
  additionalSecurityGroups:
  - sg-xxxxx
  cloudLabels:
    roles: xxxxxxxxxxxxxxxxx-master
  image: ubuntu/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220509
  machineType: c5.xlarge
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-west-2b
    roles/1: xx-kubernetes-master
  role: Master
  rootVolumeSize: 100
  subnets:
  - internal-us-west-2b
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2022-06-21T14:28:55Z"
  labels:
    kops.k8s.io/cluster: xxxxxxxxxxxxxxxxx
  name: isd-master-us-west-2c
spec:
  additionalSecurityGroups:
  - sg-xxxxx
  cloudLabels:
    roles: xxxxxxxxxxxxxxxxx-master
  image: ubuntu/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220509
  machineType: c5.xlarge
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-west-2c
    roles/1: xxx-kubernetes-master
  role: Master
  rootVolumeSize: 100
  subnets:
  - internal-us-west-2c
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2022-06-21T14:28:56Z"
  labels:
    kops.k8s.io/cluster: xxxxxxxxxxxxxxxxx
  name: isd-user
spec:
  additionalSecurityGroups:
  - sg-xxxxx
  - sg-xxxxx
  cloudLabels:
    k8s.io/cluster-autoscaler/enabled: "1"
    roles: xxxxxxxxxxxxxxxxx-user-node
  image: ubuntu/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20220509
  machineType: t3.xlarge
  maxSize: 20
  minSize: 6
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
    roles/1: xxx-kubernetes-node
    roles/2: xxx-user-node
  role: Node
  rootVolumeSize: 250
  subnets:
  - internal-us-west-2a
  - internal-us-west-2b
  - internal-us-west-2c
  suspendProcesses:
  - AZRebalance
  taints:
  - xxx=user-node:NoSchedule
8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.
The commands are terraform apply runs, and the trace logs contain sensitive information that shouldn't be shared. I can provide the logs upon request if needed.
9. Anything else we need to know?
Based on searching the PRs, this issue appears to be connected to this change: https://github.com/kubernetes/kops/compare/04be6937bef7da9c394b97662173a4a85d87d49f..86a48114d860e44af3aaba74714fd6b5460223d2
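Purely to illustrate the shape of a possible fix (the package, function, and parameter names below are hypothetical, not actual kops internals): whatever builds the SecurityGroupRule ingress permission would need to branch on pl-* entries instead of always treating the access entry as a CIDR block, roughly:

package sgrules

import (
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// ipPermissionForAccessEntry is a hypothetical helper, not kops code. It shows
// the branching needed when a kubernetesApiAccess/sshAccess entry can be either
// a CIDR block or a managed prefix list ID.
func ipPermissionForAccessEntry(entry, protocol string, fromPort, toPort int64) *ec2.IpPermission {
	perm := &ec2.IpPermission{
		IpProtocol: aws.String(protocol),
		FromPort:   aws.Int64(fromPort),
		ToPort:     aws.Int64(toPort),
	}
	if strings.HasPrefix(entry, "pl-") {
		// Prefix lists belong in PrefixListIds; sending "pl-..." as CidrIp
		// is what produces "CIDR block pl-xxx is malformed".
		perm.PrefixListIds = []*ec2.PrefixListId{{PrefixListId: aws.String(entry)}}
	} else {
		perm.IpRanges = []*ec2.IpRange{{CidrIp: aws.String(entry)}}
	}
	return perm
}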
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.