cluster-api-ipam-provider-in-cluster
Migration of machines between ranges within the same GlobalInClusterIPPool
Hello IPAM provider community, is there any simple way of migrating machines from one range to another within the same GlobalInClusterIPPool, and then releasing the original range?
E.g., we want to migrate machines from 10.129.241.30-10.129.241.40 to 10.129.241.90-10.129.241.100. After the migration, we want to free up the formerly used range 10.129.241.30-10.129.241.40.
Original object:
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: cluster-inclusterippool
spec:
  addresses:
    - 10.129.241.30-10.129.241.40
  gateway: 10.129.241.254
  prefix: 23
I would expect to be able to add the new range alongside the old one:
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: cluster-inclusterippool
spec:
  addresses:
    - 10.129.241.30-10.129.241.40
    - 10.129.241.90-10.129.241.100
  gateway: 10.129.241.254
  prefix: 23
and then removing the original range:
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: cluster-inclusterippool
spec:
  addresses:
    - 10.129.241.90-10.129.241.100
  gateway: 10.129.241.254
  prefix: 23
But that workflow is forbidden:
error: globalinclusterippools.ipam.cluster.x-k8s.io "cluster-inclusterippool" could not be patched: admission webhook "validation.globalinclusterippool.ipam.cluster.x-k8s.io" denied the request: pool addresses do not contain allocated addresses: [10.129.241.32-10.129.241.32 10.129.241.34-10.129.241.34]
The IP addresses are reserved; I understand that. I would expect IPAM to inform Cluster API to roll out new machines with new IP addresses/claims drawn from the newly added range. But it looks like the only possibility is to create a completely new GlobalInClusterIPPool object.
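For context, each address in use is represented by an IPAddress object bound to an IPAddressClaim and referencing the pool, which is why the webhook refuses to drop a range that still contains allocations. A rough, illustrative example (names and namespace are made up, and the IPAddress API version depends on your Cluster API release):
apiVersion: ipam.cluster.x-k8s.io/v1beta1
kind: IPAddress
metadata:
  name: md-0-abc123-netdev0      # hypothetical; normally named after its claim
  namespace: default
spec:
  address: 10.129.241.32         # one of the allocations reported by the webhook
  prefix: 23
  gateway: 10.129.241.254
  claimRef:
    name: md-0-abc123-netdev0    # the IPAddressClaim created for the machine's device
  poolRef:
    apiGroup: ipam.cluster.x-k8s.io
    kind: GlobalInClusterIPPool
    name: cluster-inclusterippool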
I think the easiest way would be to create a new pool with your new desired range, then create a new MachineTemplate that references that new pool, and let the rolling upgrade run; the old addresses should then be freed up. This only works with non-overlapping ranges, though; otherwise you would have to manually block the addresses that are in use and clean them up later. See the sketch below.
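A minimal sketch of that workflow, assuming a CAPV (vSphere) workload cluster; the pool name and template name are made up, and the field that references the pool (addressesFromPools on the device here) lives in a provider-specific place for other infrastructure providers:
# New pool containing only the target range (hypothetical name)
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: cluster-inclusterippool-new
spec:
  addresses:
    - 10.129.241.90-10.129.241.100
  gateway: 10.129.241.254
  prefix: 23
---
# New machine template that requests its addresses from the new pool.
# Copy the remaining fields (template, datacenter, datastore, ...) from
# your existing VSphereMachineTemplate.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VSphereMachineTemplate
metadata:
  name: workers-new-range
spec:
  template:
    spec:
      network:
        devices:
          - networkName: VM Network
            dhcp4: false
            addressesFromPools:
              - apiGroup: ipam.cluster.x-k8s.io
                kind: GlobalInClusterIPPool
                name: cluster-inclusterippool-new
Switching the MachineDeployment's spec.template.spec.infrastructureRef to the new template triggers the rolling upgrade; once the old Machines are gone, their IPAddressClaims and IPAddresses are removed, the 10.129.241.30-10.129.241.40 addresses are released, and the old pool can be deleted.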
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale