elemental
Clarify a few things in quickstart guide
Hey there. If I may add a few comments to the quickstart guide:
- The first part says "Follow this guide to have an auto-deployed cluster via rke2/k3s and managed by Rancher". Then at the end of the quickstart.md file, you say: You can now boot your nodes with this ISO, and they will:
- Boot from the ISO
- Register with the registrationURL given and create a per-machine `MachineInventory`
- Install Elemental Teal to the given device
- Restart
- Auto-deploy the cluster via k3s
But we cannot see any options to deploy rke2 instead of k3s? Maybe I misunderstood?
- The other thing: you state in the requirements that we need a running Rancher v2.6.6 cluster. But can you clarify if/how we can actually build a Rancher cluster using Elemental, like we used to be able to do with os2 before all the changes towards Elemental?
Thanks!
But we cannot see any options to deploy rke2 instead of k3s? Maybe I misunderstood?
Take a closer look at `cluster.yaml`; there's `kubernetesVersion: v1.23.7+k3s1` at the end. This is where you can influence the version and whether it'll be k3s or rke2 😉 (Just be aware that Kubernetes 1.24.x isn't supported by Rancher v2.6.6 yet.)
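To make that concrete, here is a hypothetical `cluster.yaml` fragment that switches the same cluster to rke2. The resource shape follows Rancher's `provisioning.cattle.io/v1` Cluster, but the name and version string are only examples — pick a version your Rancher release actually supports:

```yaml
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: my-elemental-cluster     # example name
spec:
  # the "+rke2r1" suffix selects rke2; a "+k3s1" suffix would select k3s
  kubernetesVersion: v1.23.7+rke2r1
```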
2. The other thing, you state in the requirements that we need an active Rancher cluster V2.6.6 - but can we clarify if/how we can actually build a rancher cluster using elemental like we used to be able to do in os2 before all the changes towards elemental?
I'm not sure you could "build from scratch" with what we now call "the old stack" (aka os2) either. The primary goal (as of now) for Elemental is to deploy bare-metal machines at the edge under the control of a "management cluster".
That said, there's nothing in the current architecture that would prevent deployment without a management cluster, we just haven't put any focus on this use-case yet.
Thank you for your reply, and thanks for clarifying the rke2/k3s question.
Yes, I was able to build a management cluster from scratch with the old stack (rancherd?). There were options to deploy a fully functional multi-cluster Rancher management cluster. With that said, a better-formulated question would be: "Can we build a multi-cluster Rancher management cluster from scratch with Elemental, and if so, how?" From your comment, I understand there hasn't been any focus on this use case yet.
That said, I'm really excited about Elemental.
Regards,
Well, if it was possible with os2, it should be possible with Elemental. The architecture in general hasn't changed; we "just" renamed some pieces:
- ros-installer is now elemental-register + elemental-cli
- rancherd is now elemental-system-agent
- rancheros-operator is now elemental-operator
Also, key names in the config YAMLs have changed: any "os2" or "rancheros" values are now "elemental".
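As a rough sketch of what that rename means for the install section of a cloud-config (minimal hypothetical fragments — the exact key sets may differ between versions):

```yaml
# Old stack (os2 / RancherOS):
rancheros:
  install:
    device: /dev/sda

# Elemental stack — same structure, renamed top-level key:
elemental:
  install:
    device: /dev/sda
```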
Can you attach your old config files and a brief description of your workflow? We can try to reproduce it using the "elemental" stack 🤞🏻
Sure. What I did was boot from the ISO and create a config file like this one:
```yaml
#cloud-config
rancheros:
  install:
    # An http://, https://, or tftp:// URL to load as the base configuration
    # for this configuration. This configuration can include any install
    # directives or OEM configuration. The resulting merged configuration
    # will be read by the installer and all content of the merged config will
    # be stored in /oem/99_custom.yaml in the created image.
    # configURL: http://example.com/machine-cloud-config

    # Turn on verbose logging for the installation process
    debug: false

    # The target device that will be formatted and grub will be installed on.
    # The partition table will be cleared and recreated with the default
    # partition layout. If noFormat is set to true this parameter is only
    # used to install grub.
    device: /dev/sda

    # If the system has the path /sys/firmware/efi it will be treated as a
    # UEFI system. If you are creating an UEFI image on a non-EFI platform
    # then this flag will force the installer to use UEFI even if not detected.
    forceEFI: false

    # If true then it is assumed that the disk is already formatted with the
    # standard partitions needed by RancherOS. Refer to the partition table
    # section below for the exact requirements. Also, if this is set to true
    noFormat: false

    # After installation the system will reboot by default. If you wish to
    # instead power off the system set this to true.
    powerOff: true

    # The installed image will set the default console to the current TTY value
    # used during the installation. To force the installation to use a
    # different TTY then set that value here.
    tty: ttyS0

# Any other cloud-init values can be included in this file and will be stored
# in /oem/99_custom.yaml of the installed image

# Add additional users or set the password/ssh keys for root
users:
- name: "root"
  passwd: "apassword"
  shell: "/bin/bash"
  # Assigns these keys to the first user in users or root if there is none
  # ssh_authorized_keys:
  # - asdd

# Run these commands once the system has fully booted
# runcmd:
# - foo

# Hostname to assign
hostname: "ahostname"

# Write arbitrary files
# write_files:
# - encoding: b64
#   content: CiMgVGhpcyBmaWxlIGNvbnRyb2xzIHRoZSBzdGF0ZSBvZiBTRUxpbnV4
#   path: /foo/bar
#   permissions: "0644"
#   owner: "bar"

# Rancherd configuration
write_files:
- container: ntp
  path: /etc/ntp.conf
  permissions: "0644"
  owner: root
  content: |
    server 1.pool.ntp.org iburst
    server 0.pool.ntp.org iburst
    restrict default nomodify nopeer noquery limited kod
    restrict 127.0.0.1
    restrict [::1]
rancherd:
  address: x.x.x.x
  internalAddress: x.x.x.x
  kubernetesVersion: stable:rke2
  rancherVersion: stable
  nodeName: nodename
  rancherValues:
    auditLog:
      destination: sidecar
      hostPath: /var/log/rancher/audit/
      level: 0
      maxAge: 1
      maxBackup: 1
      maxSize: 100
    features: multi-cluster-management=true
    hostPort: 0
    hostname: ahostname
    ingress:
      enabled: true
      hosts:
      includeDefaultExtraAnnotations: true
      extraAnnotations: {}
      tls:
        # options: rancher, letsEncrypt, secret
        source: secret
        secretName: tls-rancher-ingress
    noDefaultAdmin: false
    replicas: -3
    tls: ingress
  role: cluster-init
  tlsSans:
  - sometlssans
  token: atoken
users:
- name: root
  passwd: apassword
  shell: /bin/bash
```
After this, if the config file was `config.yaml`, I ran `ros-installer --config-file ./config.yaml`, and this would build a one-node multi-cluster management cluster. After that, I joined more nodes using a similar config file during installation.
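For completeness, joining the extra nodes with rancherd typically only needs a different role and a pointer to the first node. A hypothetical variant of the `rancherd` section above — the port and URL shape are assumptions, so check the rancherd docs for your version:

```yaml
rancherd:
  role: server                  # or "agent" for worker-only nodes
  server: https://x.x.x.x:8443  # assumed bootstrap URL of the cluster-init node
  token: atoken                 # must match the token on the first node
```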
Thank you for your reply. Thanks for clarifying for rke2/k3s.
Good point here, I will add a small excerpt below to mention how to change between rke2/k3s versions :+1:
A section was added to point out the different rke2/k3s versions, how to choose them, and where to set them.