cluster-api-provider-azure
Support AzureManagedCluster BYO vnet in different RG
/kind bug
What steps did you take and what happened: I'm creating an AKS cluster and I've modified the AzureManagedControlPlane to specify:
virtualNetwork:
  name: EXISTING_VNET_NAME
  cidrBlock: 10.18.22.0/24
  subnet:
    name: NAME_OF_SUBNET_IN_ABOVE_VNET
    cidrBlock: 10.18.22.0/24
The names and CIDR ranges match exactly what already exists. The VNet is located in the same location/region but in a different resource group.
When deployed, a new VNet with the same name is created in the cluster's resource group, although in the Azure portal I can select the existing VNet even when it is in a different resource group.
What did you expect to happen: The cluster to reuse my manually prepared VNet.
Anything else you would like to add:
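For reference, this is roughly where that block sits in the AzureManagedControlPlane manifest. The surrounding fields below are a trimmed-down sketch with placeholder values, not my exact manifest:

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedControlPlane
metadata:
  name: my-cluster                  # placeholder
  namespace: default
spec:
  location: westeurope              # placeholder; same region as the existing VNet
  resourceGroupName: my-cluster-rg  # the resource group the cluster is created in
  version: v1.21.1
  sshPublicKey: "ssh-rsa ..."
  virtualNetwork:
    name: EXISTING_VNET_NAME
    cidrBlock: 10.18.22.0/24
    subnet:
      name: NAME_OF_SUBNET_IN_ABOVE_VNET
      cidrBlock: 10.18.22.0/24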
Environment:
- cluster-api-provider-azure version: 1.1.0
- Kubernetes version (use kubectl version): 1.21.1 (kind cluster)
- OS (e.g. from /etc/os-release): Ubuntu 20.04.3 LTS
I did figure out how to do this; my config looks like the following:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureCluster
metadata:
  name: mycluster
  namespace: default
spec:
  networkSpec:
    vnet:
      resourceGroup: netvnet_locked
      name: mycluster_VNET
    subnets:
      - name: defaulthigh
        role: control-plane
        cidrBlocks:
          - 192.168.36.128/25
      - name: default
        role: node
        cidrBlocks:
          - 192.168.36.0/25
I also make sure the resource group that contains the VNet is locked, so nothing can remove it without an extra confirmation step through the UI.
Documentation reference I was using, at least: https://github.com/kubernetes-sigs/cluster-api-provider-azure/blob/main/docs/book/src/topics/custom-vnet.md
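For completeness, the AzureCluster above is what the Cluster object's infrastructureRef points at. A minimal sketch follows; the controlPlaneRef is whatever control plane flavor you use, and the KubeadmControlPlane name here is hypothetical:

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: mycluster
  namespace: default
spec:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AzureCluster
    name: mycluster
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: mycluster-control-plane   # hypothetical name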
The VNET is located in the same "location/region" but a different Resource-group.
You would need to specify the vnet resourceGroup in the vnet spec as @dmlb2000 pointed out above.
I feel I need to point out that this is the "Managed" version. That CRD does not allow a resource group to be specified alongside the VNet.
Oh, apologies... the setup might be different than what I'm looking at... I'm not familiar with the Managed Control Plane part of the API.
No problem. I've also spent some time reading that page before realizing it didn't apply to my use-case.
Got it. In that case this would be a feature request, as AzureManagedCluster does not currently support bringing your own VNet in a separate RG.
/kind feature
/retitle Support AzureManagedCluster BYO vnet in different RG
/area managed cluster
@CecileRobertMichon: The label(s) area/managed, area/cluster cannot be applied, because the repository doesn't have them.
In response to this:
/area managed cluster
/area managedclusters
@alexeldeib does AKS support BYO vnet in a separate RG?
I've brought my own VNet with Ansible to deploy AKS; however, you have to have permissions to peer to that VNet. The AKS API server doesn't seem to run in the same IP space as the VNet. I backed off that configuration because kubectl logs and kubectl exec never worked in that situation (I/O timeouts trying to talk to the worker nodes on port 10250).
I've created a working cluster with a VNet allowing access both ways between pods and the on-prem network. The command is from a Makefile with variables, hence the $(VAR) notation, and some options might be redundant:
az aks create \
--name $(AZURE_CLUSTER) \
--resource-group $(AZURE_RESOURCEGROUP) \
--location $(AZURE_REGION) \
--subscription $(AZURE_SUB) \
--enable-private-cluster \
--private-dns-zone none \
--admin-username ops \
--auto-upgrade-channel stable \
--dns-name-prefix $(AZURE_CLUSTER) \
--service-cidr 172.16.0.0/16 \
--dns-service-ip 172.16.0.10 \
--docker-bridge-address 172.17.0.1/16 \
--pod-cidr 172.18.0.0/15 \
--kubernetes-version $(AZURE_K8VERSION) \
--network-plugin kubenet \
--network-policy calico \
--node-count 1 \
--node-vm-size Standard_DS2_v2 \
--vm-set-type VirtualMachineScaleSets \
--os-sku Ubuntu \
--ssh-key-value "ssh-rsa....." \
--vnet-subnet-id $(AZURE_VNET) \
--enable-managed-identity \
--assign-identity $(shell jq '.id' secrets/$(AZURE_ID).json)
(The custom CIDR ranges are there because the defaults overlap our own networks.)
So now I'd just like to create this using Cluster API 🙂
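For anyone mapping that az aks create invocation onto CAPZ's managed-cluster objects, here is a rough sketch of the equivalent AzureManagedControlPlane and AzureManagedMachinePool. The values are placeholders taken from the command above, and exact field names should be checked against the CRDs of whatever CAPZ version you run:

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedControlPlane
metadata:
  name: my-aks-cluster
  namespace: default
spec:
  location: westeurope                  # placeholder for $(AZURE_REGION)
  resourceGroupName: my-aks-rg          # placeholder for $(AZURE_RESOURCEGROUP)
  subscriptionID: 00000000-0000-0000-0000-000000000000  # placeholder for $(AZURE_SUB)
  version: v1.21.1                      # placeholder for $(AZURE_K8VERSION)
  sshPublicKey: "ssh-rsa ..."           # --ssh-key-value
  networkPlugin: kubenet                # --network-plugin
  networkPolicy: calico                 # --network-policy
  virtualNetwork:                       # the BYO VNet, which is what this issue is about
    name: EXISTING_VNET_NAME
    cidrBlock: 10.18.22.0/24
    subnet:
      name: EXISTING_SUBNET_NAME
      cidrBlock: 10.18.22.0/24
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedMachinePool
metadata:
  name: pool0
  namespace: default
spec:
  mode: System                          # the default system node pool
  sku: Standard_DS2_v2                  # --node-vm-size

Each AzureManagedMachinePool is wrapped by a cluster.x-k8s.io MachinePool, and the control plane is referenced from the Cluster object's controlPlaneRef; both are omitted here for brevity.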
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Hey, we would need this as well for our use case, and would love to contribute to this. How do I go about it?
Hi @mjnovice, this was already fixed by https://github.com/kubernetes-sigs/cluster-api-provider-azure/pull/2667
Looks like this issue was a duplicate of https://github.com/kubernetes-sigs/cluster-api-provider-azure/issues/2587
The commit is currently in the main branch and planned to be released this week in v1.6.0 (https://github.com/kubernetes-sigs/cluster-api-provider-azure/milestone/28)
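Once that release is out, the managed control plane spec should allow something along these lines (field name per the linked PR; values are placeholders):

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedControlPlane
metadata:
  name: my-cluster
  namespace: default
spec:
  virtualNetwork:
    resourceGroup: EXISTING_VNET_RESOURCE_GROUP   # new: the RG that already contains the VNet
    name: EXISTING_VNET_NAME
    cidrBlock: 10.18.22.0/24
    subnet:
      name: EXISTING_SUBNET_NAME
      cidrBlock: 10.18.22.0/24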
/close
@CecileRobertMichon: Closing this issue.
In response to this:
/close