eks-anywhere
Remove disk constraints in Tinkerbell provider
#3233 is a prereq to this issue.
The Tinkerbell provider constrains customers to use machines with the same disk type within a node group (e.g. the control plane or worker node group 1). This was because Tinkerbell templates had no access to the hardware associated with a workflow at render time, so the disk had to be pre-populated by the EKS-A CLI before hardware was selected.
The latest changes to Tinkerbell feed hardware data (currently disks only) to templates, rooted at .Hardware. A function for rendering full disk paths with a partition number, formatPartition, was added; it supports block (/dev/sd) and NVMe (/dev/nvme) devices.
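As a rough sketch of the idea, the snippet below shows how disk data could be exposed to a Go template under a .Hardware key; the Hardware struct and field names here are illustrative assumptions, not the provider's actual types.

package main

import (
	"os"
	"text/template"
)

// Hardware is a stand-in for the hardware data made available to templates.
type Hardware struct {
	Disks []string
}

func main() {
	// Disk paths are supplied at render time rather than pre-populated
	// by the EKS-A CLI before hardware selection.
	data := map[string]interface{}{
		"Hardware": Hardware{Disks: []string{"/dev/sda", "/dev/nvme0n1"}},
	}

	tpl := template.Must(template.New("disk").Parse("{{ index .Hardware.Disks 0 }}"))
	_ = tpl.Execute(os.Stdout, data) // prints: /dev/sda
}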
Example usage of the disk partitioning function
formatPartition ( index .Hardware.Disks 0 ) 1
index .Hardware.Disks 0 retrieves the first disk in the disks slice taken from the Hardware Kubernetes object associated with the workflow. formatPartition <disk> 1 formats the disk path with partition 1.
formatPartition "/dev/sda" 1 # output: /dev/sda1
formatPartition "/dev/nvme0n1" 2 # output: /dev/nvme0n1p2