cluster-api-provider-aws
Adding custom ingress rules for port 6443 fails with error
/kind bug
What steps did you take and what happened:
Referring to the fixes for custom ingress rules in https://github.com/kubernetes-sigs/cluster-api-provider-aws/pull/4304, I created a cluster with a custom IP CIDR for port 6443 instead of allowing traffic from everywhere. The cluster specification looks as follows:
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
  name: test-cluster-1
spec:
  region: us-east-2
  sshKeyName: sshKeyTemp
  controlPlaneLoadBalancer:
    crossZoneLoadBalancing: false
    ingressRules:
    - cidrBlocks:
      - 192.168.12.22/24
      description: ingress-test-1
      fromPort: 6443
      protocol: tcp
      toPort: 6443
    scheme: internet-facing
```
Error seen:
I0829 09:48:47.246237 1 logger.go:67] "Cluster infrastructure is not ready yet"
E0829 09:48:47.260597 1 controller.go:329] "Reconciler error" err=<
failed to authorize security group "sg-0c2909d6cbbf67f60" ingress rules: [protocol=tcp/range=[179-179]/description=bgp (calico) protocol=4/range=[-1-65535]/description=IP-in-IP (calico) protocol=tcp/range=[6443-6443]/description=Kubernetes API protocol=tcp/range=[2379-2379]/description=etcd protocol=tcp/range=[2380-2380]/description=etcd peer protocol=tcp/range=[6443-6443]/description=ingress-test-1]: InvalidParameterValue: The same permission must not appear multiple times
status code: 400, request id: c8e14459-3805-43fb-b107-e86acfa15355
> controller="awscluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="AWSCluster" AWSCluster="default/test-cluster-1" namespace="default" name="test-cluster-1" reconcileID="53f246d4-9e6f-429c-a947-0df410a05808"
I0829 09:48:47.260817 1 logger.go:67] "Cluster infrastructure is not ready yet"
I0829 09:48:47.262557 1 logger.go:67] "Reconciling AWSCluster" controller="awscluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="AWSCluster" AWSCluster="default/test-cluster-1" namespace="default" name="test-cluster-1" reconcileID="1cec16c0-ccd1-438b-a682-82cf0cef3d7b" cluster="default/test-cluster-1"
I0829 09:48:48.794886 1 logger.go:67] "Reconciling subnets" controller="awscluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="AWSCluster" AWSCluster="default/test-cluster-1" namespace="default" name="test-cluster-1" reconcileID="1cec16c0-ccd1-438b-a682-82cf0cef3d7b" cluster="default/test-cluster-1"
E0829 09:48:52.009281 1 logger.go:83] "failed to reconcile security groups" err=<
failed to authorize security group "sg-0c2909d6cbbf67f60" ingress rules: [protocol=tcp/range=[179-179]/description=bgp (calico) protocol=4/range=[-1-65535]/description=IP-in-IP (calico) protocol=tcp/range=[6443-6443]/description=Kubernetes API protocol=tcp/range=[2379-2379]/description=etcd protocol=tcp/range=[2380-2380]/description=etcd peer protocol=tcp/range=[6443-6443]/description=ingress-test-1]: InvalidParameterValue: The same permission must not appear multiple times
status code: 400, request id: 46861e8f-3b09-4972-b03c-56a141eba516
> controller="awscluster" controllerGroup="infrastructure.cluster.x-k8s.io" controllerKind="AWSCluster" AWSCluster="default/test-cluster-1" namespace="default" name="test-cluster-1" reconcileID="1cec16c0-ccd1-438b-a682-82cf0cef3d7b" cluster="default/test-cluster-1"
I0829 09:48:52.061093 1 logger.go:67] "Cluster infrastructure is not ready yet"
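For context, EC2 returns this InvalidParameterValue error when a single AuthorizeSecurityGroupIngress request repeats a permission. A minimal aws-sdk-go sketch of what the failing request appears to contain, reconstructed from the log above (the 0.0.0.0/0 CIDR on the provider-managed "Kubernetes API" rule is an assumption, and this is not CAPA's actual code):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Reconstructed request shape: two IpPermission entries share protocol
	// tcp and port range 6443-6443 (the provider-managed "Kubernetes API"
	// rule and the custom "ingress-test-1" rule). EC2 rejects such a request
	// with InvalidParameterValue: "The same permission must not appear
	// multiple times".
	input := &ec2.AuthorizeSecurityGroupIngressInput{
		GroupId: aws.String("sg-0c2909d6cbbf67f60"),
		IpPermissions: []*ec2.IpPermission{
			{
				IpProtocol: aws.String("tcp"),
				FromPort:   aws.Int64(6443),
				ToPort:     aws.Int64(6443),
				IpRanges: []*ec2.IpRange{{
					CidrIp:      aws.String("0.0.0.0/0"), // assumed default CIDR
					Description: aws.String("Kubernetes API"),
				}},
			},
			{
				IpProtocol: aws.String("tcp"),
				FromPort:   aws.Int64(6443),
				ToPort:     aws.Int64(6443),
				IpRanges: []*ec2.IpRange{{
					CidrIp:      aws.String("192.168.12.22/24"),
					Description: aws.String("ingress-test-1"),
				}},
			},
		},
	}
	fmt.Println(input) // inspect the duplicated tcp/6443 entries
}
```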
What did you expect to happen:
The ingress rule for api-server-lb should be created for the given CIDR on port 6443.
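A plausible fix on the provider side would be to coalesce ingress rules that share a protocol and port range into a single permission entry (the union of their CIDR blocks) before calling AuthorizeSecurityGroupIngress. A minimal, self-contained Go sketch of that dedup step; the ingressRule type and helper names are hypothetical, not CAPA's actual implementation:

```go
package main

import "fmt"

// ingressRule mirrors the fields CAPA renders into the error message;
// this type is hypothetical, for illustration only.
type ingressRule struct {
	Protocol    string
	FromPort    int64
	ToPort      int64
	CidrBlocks  []string
	Description string
}

// dedupeKey groups rules that EC2 would treat as the same permission entry.
func dedupeKey(r ingressRule) string {
	return fmt.Sprintf("%s/%d-%d", r.Protocol, r.FromPort, r.ToPort)
}

// mergeRules coalesces rules sharing a protocol/port range into one entry
// whose CIDR blocks are the union of the inputs, keeping the first rule's
// description. A real implementation would also drop duplicate CIDRs.
func mergeRules(rules []ingressRule) []ingressRule {
	merged := map[string]*ingressRule{}
	var order []string
	for _, r := range rules {
		k := dedupeKey(r)
		if existing, ok := merged[k]; ok {
			existing.CidrBlocks = append(existing.CidrBlocks, r.CidrBlocks...)
			continue
		}
		cp := r
		merged[k] = &cp
		order = append(order, k)
	}
	out := make([]ingressRule, 0, len(order))
	for _, k := range order {
		out = append(out, *merged[k])
	}
	return out
}

func main() {
	// The two conflicting tcp/6443 rules from the error log above.
	rules := []ingressRule{
		{"tcp", 6443, 6443, []string{"0.0.0.0/0"}, "Kubernetes API"},
		{"tcp", 6443, 6443, []string{"192.168.12.22/24"}, "ingress-test-1"},
	}
	for _, r := range mergeRules(rules) {
		fmt.Printf("%s %d-%d %v (%s)\n", r.Protocol, r.FromPort, r.ToPort, r.CidrBlocks, r.Description)
	}
}
```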
Environment:
- Cluster-api-provider-aws version: 2.2.1
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release): ubuntu
/triage accepted
/priority important-soon
This issue is labeled with priority/important-soon but has not been updated in over 90 days, and should be re-triaged.
Important-soon issues must be staffed and worked on either currently, or very soon, ideally in time for the next release.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Deprioritize it with /priority important-longterm or /priority backlog
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten