cluster-api-provider-proxmox
Support DHCP
Describe the solution you'd like
Our current setup is based on IPAM and static IP allocation. Since the QEMU guest agent is pre-installed, we can support DHCP in the network-config.
Anything else you would like to add:
PLEASE NOTE: since we rely on kube-vip for the control planes, the CONTROL_PLANE_ENDPOINT must remain static and must be set when creating a new cluster.
Environment:
- Cluster-api-provider-proxmox version:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):
This could be a good first candidate feature to port from https://github.com/k8s-proxmox/cluster-api-provider-proxmox.
Proposal
Since introducing a new field can be a breaking change, we probably need to introduce a new API version.
There are three proposals as follows.
(1) v1alpha2 with DHCP option
In this case, v1alpha2 will be the default and storage version. It adds:
- spec.ipv4Config.dhcp
- spec.ipv6Config.dhcp

The IPAM pool config will now be wrapped into static.
The conversion webhook will convert v1alpha1 resources (a rough sketch of the conversion follows the cluster examples below).
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: ProxmoxCluster
metadata:
  name: "test-cluster"
spec:
  controlPlaneEndpoint:
    host: 10.10.10.3
    port: 6443
  ipv4Config:
    dhcp: true
  dnsServers: [10.1.1.1]
---
## OR
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: ProxmoxCluster
metadata:
  name: "test-cluster"
spec:
  controlPlaneEndpoint:
    host: 10.10.10.3
    port: 6443
  ipv4Config:
    static:
      addresses: ["10.10.10.5-10.10.10.60"]
      prefix: 24
      gateway: 10.10.10.1
  dnsServers: [10.1.1.1]
```
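To make the conversion webhook mentioned above concrete, here is a rough sketch of what the v1alpha1 to v1alpha2 cluster conversion could look like, assuming the controller-runtime conversion.Convertible pattern; the v1alpha2 field names (IPConfig, Static) follow this proposal, the import path for the new version is hypothetical, and ConvertFrom would mirror it.

```go
package v1alpha1

import (
	"sigs.k8s.io/controller-runtime/pkg/conversion"

	// hypothetical import path for the proposed API version
	v1alpha2 "github.com/ionos-cloud/cluster-api-provider-proxmox/api/v1alpha2"
)

// ConvertTo converts this v1alpha1 ProxmoxCluster to the v1alpha2 hub version.
func (src *ProxmoxCluster) ConvertTo(dstRaw conversion.Hub) error {
	dst := dstRaw.(*v1alpha2.ProxmoxCluster)

	dst.ObjectMeta = src.ObjectMeta
	dst.Spec.ControlPlaneEndpoint = src.Spec.ControlPlaneEndpoint
	dst.Spec.DNSServers = src.Spec.DNSServers

	// The old pool configs get wrapped into the new static field;
	// dhcp stays false for converted resources.
	if src.Spec.IPv4Config != nil {
		dst.Spec.IPv4Config = &v1alpha2.IPConfig{Static: src.Spec.IPv4Config}
	}
	if src.Spec.IPv6Config != nil {
		dst.Spec.IPv6Config = &v1alpha2.IPConfig{Static: src.Spec.IPv6Config}
	}
	return nil
}
```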
Apart from the changes in the ProxmoxCluster, new fields can also be introduced on the machines.
```yaml
kind: ProxmoxMachineTemplate
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
metadata:
  name: "test-control-plane"
spec:
  template:
    spec:
      sourceNode: "pve"
      templateID: 100
      format: "qcow2"
      full: true
      network:
        default:
          bridge: vmbr1
          model: virtio
          dhcp4: true # This will force the machine to use DHCP instead of the cluster static config
          dhcp6: true # This will force the machine to use DHCP instead of the cluster static config
        additionalDevices:
          - name: net1
            bridge: vmbr2
            model: virtio
            dhcp4: true # Either DHCP or IPAM pool config must be defined
            dhcp6: true # Either DHCP or IPAM pool config must be defined
```
(2) v1alpha1 with DHCP option
In this case the API remains backward compatible and we do not need to introduce v1alpha2.
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: ProxmoxCluster
metadata:
  name: "test-cluster"
spec:
  controlPlaneEndpoint:
    host: 10.10.10.3
    port: 6443
  dhcp4: true # replaces ipv4Config
  dhcp6: true # replaces ipv6Config
  dnsServers: [10.1.1.1]
```
(3) machine v1alpha1 with DHCP option
In this case the DHCP option is only available on the machine, while the cluster's IP pool config becomes optional. This is also backward compatible.
```yaml
kind: ProxmoxMachineTemplate
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
metadata:
  name: "test-control-plane"
spec:
  template:
    spec:
      sourceNode: "pve"
      templateID: 100
      format: "qcow2"
      full: true
      network:
        default:
          bridge: vmbr1
          model: virtio
          dhcp4: true # This will force the machine to use DHCP instead of the cluster static config
          dhcp6: true # This will force the machine to use DHCP instead of the cluster static config
        additionalDevices:
          - name: net1
            bridge: vmbr2
            model: virtio
            dhcp4: true # Either DHCP or IPAM pool config must be defined
            dhcp6: true # Either DHCP or IPAM pool config must be defined
```
Please share suggestions on any further implementation details.
This requires your attention, @avorima.
Just a thought: A user might want to have control-plane nodes addressed with static IPs, while at the same time not wanting to worry about worker nodes and use DHCP there. Or just globally want to enable DHCP for everything.
So Option 1 might be my favorite.
I think the first option of extending the cluster ipv4/6Config fields makes the most semantic sense, but I wouldn't use the word static. It's IPAM after all, so it's not the same as providing static config. Maybe just create a new struct with all the fields from the InClusterIPPoolSpec and add a dhcp boolean field. This way you could also get rid of the ipam provider dependency in the api package.
As for the machines, adding the dhcp4/6 field to the Network struct is probably better than relying on the absence of data. There are a few ways of handling defaults here, so it may be desirable to force users to either enable dhcp or provide an ipam ref, especially considering we support dual stack. This wouldn't necessarily be a breaking change, since right now the ipam refs are required in the additional network devices.
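A minimal sketch of what that could look like on the additional device type, assuming a kubebuilder CEL rule to enforce "either DHCP or an IPAM pool ref"; the type name, fields, and rule wording are illustrative rather than existing code, and corev1 refers to k8s.io/api/core/v1.

```go
// +kubebuilder:validation:XValidation:rule="(has(self.dhcp4) && self.dhcp4) || (has(self.dhcp6) && self.dhcp6) || has(self.ipv4PoolRef) || has(self.ipv6PoolRef)",message="either dhcp4/dhcp6 or an IPAM pool ref must be set"
type AdditionalNetworkDevice struct {
	Name   string `json:"name"`
	Bridge string `json:"bridge"`
	// +optional
	Model string `json:"model,omitempty"`

	// DHCP4/DHCP6 enable DHCP per address family on this device.
	// +optional
	DHCP4 bool `json:"dhcp4,omitempty"`
	// +optional
	DHCP6 bool `json:"dhcp6,omitempty"`

	// The pool refs become optional, since DHCP can be used instead.
	// +optional
	IPv4PoolRef *corev1.TypedLocalObjectReference `json:"ipv4PoolRef,omitempty"`
	// +optional
	IPv6PoolRef *corev1.TypedLocalObjectReference `json:"ipv6PoolRef,omitempty"`
}
```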
Another option would be to use the cluster's ip config as template for defaults, i.e. cluster has dual-stack, machine uses dhcp dual-stack for additional NICs.
Here are a few permutations of cluster ip config and machine additional network devices to consider:
- cluster none, machine none
- cluster ipam, machine none
- cluster dhcp, machine none
- cluster none, machine ipam
- cluster none, machine dhcp
These are meant to be taken as the config that is given at the point of resource creation, i.e. before defaults are applied. I think the first combination should probably be invalid, while the others could default to using dhcp for ipv4/ipv6 or dual-stack.
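As a sketch of how a defaulting webhook could resolve these combinations, assuming "machine none" falls back to DHCP for whichever address families the cluster defines; the helper and field names are illustrative, and errors is the standard library package.

```go
// defaultAdditionalDevice fills in DHCP defaults for an additional network device
// based on the combination of cluster and machine config.
func defaultAdditionalDevice(cluster *ProxmoxClusterSpec, dev *AdditionalNetworkDevice) error {
	machineHasConfig := dev.DHCP4 || dev.DHCP6 || dev.IPv4PoolRef != nil || dev.IPv6PoolRef != nil
	clusterHasConfig := cluster.IPv4Config != nil || cluster.IPv6Config != nil

	switch {
	case !clusterHasConfig && !machineHasConfig:
		// cluster none, machine none: nothing to derive an address from -> reject.
		return errors.New("additional device needs dhcp4/dhcp6 or an IPAM pool ref")
	case !machineHasConfig:
		// cluster ipam/dhcp, machine none: default to DHCP for the cluster's families.
		dev.DHCP4 = cluster.IPv4Config != nil
		dev.DHCP6 = cluster.IPv6Config != nil
	}
	// cluster none, machine ipam/dhcp: the explicit machine config is kept as-is.
	return nil
}
```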
We could have this in Go and probably get rid of *ipamicv1.InClusterIPPoolSpec, as you stated.
Or embed it into the IPConfig.
```go
// IPConfig contains information about available IP config.
type IPConfig struct {
	// DHCP indicates whether DHCP is enabled for the machines.
	// DHCP is mutually exclusive with IPv4Config static addresses.
	// If DHCP is true, IPv4Config and IPv6Config must not be set.
	// +optional
	// +kubebuilder:default=false
	DHCP bool `json:"dhcp,omitempty"`

	// Static contains information about available static IP pools.
	// Static is mutually exclusive with DHCP.
	// +optional
	// +kubebuilder:validation:XValidation:rule="self.addresses.size() > 0",message="IP Config addresses must be provided"
	Static *ipamicv1.InClusterIPPoolSpec `json:"static,omitempty"`
}
```
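For comparison, the dependency-free variant mentioned earlier could look roughly like this; the field names mirror InClusterIPPoolSpec but are only a sketch, not a final API.

```go
// IPPoolConfig copies the pool fields so the api package no longer
// needs to import the in-cluster IPAM provider types.
type IPPoolConfig struct {
	// Addresses is a list of IP addresses or ranges to allocate from.
	Addresses []string `json:"addresses"`

	// Prefix is the network prefix length.
	Prefix int `json:"prefix"`

	// Gateway is the default gateway for the addresses above.
	// +optional
	Gateway string `json:"gateway,omitempty"`
}

// IPConfig would then reference the local type instead of ipamicv1.InClusterIPPoolSpec.
type IPConfig struct {
	// DHCP enables DHCP and is mutually exclusive with Static.
	// +optional
	DHCP bool `json:"dhcp,omitempty"`

	// Static contains the static pool config and is mutually exclusive with DHCP.
	// +optional
	Static *IPPoolConfig `json:"static,omitempty"`
}
```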
So, you also suggest adding DHCP options to the machines? And do you agree with moving to v1alpha2?
Yes, I think the changes warrant a new API version. Machines should definitely also be extended with DHCP config for both ipv4 and ipv6.
I was just saying that the additions to the API types should be easily convertible and also straightforward in their usage. That's why I listed the possible combinations of configs that should be considered.
Would it be possible to add vlan tag support at the same time?
@mkamsikad2 that could go into a separate PR.