
Azure load balancer ignores KubeadmControlPlane localAPIEndpoint bindPort configuration

Open BDworak opened this issue 3 years ago • 2 comments

/kind bug

What steps did you take and what happened: When deploying a CAPI cluster, the KubeadmControlPlane CRD supports changing the API server port by setting bindPort under localAPIEndpoint. This modifies the kube-apiserver static pod manifest so that the API server listens on the port defined in the bindPort: field, as the excerpt after the manifest below illustrates.

apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-capz-control-plane
  namespace: my-cluster-capz
spec:
  kubeadmConfigSpec:
    # ---- Snipped ----
    initConfiguration:
      localAPIEndpoint:
        bindPort: 443
    # ---- Snipped ----
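
For context, on a kubeadm-bootstrapped control plane this setting typically surfaces as the kube-apiserver --secure-port flag in the generated static pod manifest. The file path and field layout below are standard kubeadm, but the excerpt is an illustrative sketch, not copied from a live node:

# /etc/kubernetes/manifests/kube-apiserver.yaml (illustrative excerpt)
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --secure-port=443   # follows localAPIEndpoint.bindPort
        # ---- Snipped ----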

When spinning up an AzureCluster with a load balancer of type Internal, the load balancer is created pointing at port 6443 and does not take into account that the API server is running on the custom-configured port (see the illustrative sketch after the manifest below).

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureCluster
metadata:
  name: my-capz
  namespace: my-cluster-capz
spec:
  # ---- Snipped ----
  networkSpec:
    apiServerLB:
      name: my-capz-internal-lb
      type: Internal
      frontendIPs:
        - name: apiserver-frontend
          privateIP: 10.10.10.10
  # ---- Snipped ----
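
To make the mismatch concrete, the reconciled AzureCluster would be expected to end up with a control plane endpoint like the sketch below (field names from the v1beta1 API; the defaulted values are an assumption based on the observed behavior, not copied from a live cluster):

# Illustrative: the defaulted endpoint keeps the standard API server port
spec:
  controlPlaneEndpoint:
    host: 10.10.10.10
    port: 6443   # not derived from the KubeadmControlPlane bindPort of 443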

What did you expect to happen: This could be fixed in two different ways:

  1. (ideal) The AzureCluster resource becomes aware of the bindPort configuration and creates a load balancer pointing at the bindPort configured in the KubeadmControlPlane.
  2. (best configuration control) Allow the frontendIPs object to accept a port: field for configuring a custom API server port; a hypothetical sketch follows this list.
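
A hypothetical sketch of option 2, if frontendIPs grew a port: field (this field does not exist in the current API; its name and placement are assumptions):

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureCluster
spec:
  networkSpec:
    apiServerLB:
      name: my-capz-internal-lb
      type: Internal
      frontendIPs:
        - name: apiserver-frontend
          privateIP: 10.10.10.10
          port: 443   # hypothetical proposed field, not in the current API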

Environment:

  • cluster-api-provider-azure version: 1.2.0
  • Kubernetes version: (use kubectl version): 1.23.5
  • OS (e.g. from /etc/os-release): N/A

BDworak · May 25 '22 22:05

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Aug 23 '22 23:08

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Sep 22 '22 23:09

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot · Oct 23 '22 00:10

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the triage robot's /close not-planned comment (quoted in full above).

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Oct 23 '22 00:10