k3s-ansible
How to proceed with version upgrades or adding nodes
Hello
What is the process for upgrading the Kubernetes version? Do I just have to change the version in group_vars and then run the playbook again, or is this not supported with the current playbook?
Can I extend an installed cluster with more nodes? Do I just have to add more nodes to the inventory and then run the playbook again?
Thanks for your help.
I just replaced the version number in group_vars and ran the playbook again to upgrade and it works perfectly.
I had to restart the nodes for the upgrade to take effect.
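For reference, a minimal sketch of what "replacing the version number" looks like, assuming the stock k3s-ansible inventory layout (the variable name and file path come from the upstream repo; the version string is just an example):

```yaml
# inventory/my-cluster/group_vars/all.yml
k3s_version: v1.24.4+k3s1   # bump this, then re-run the playbook
```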
In my case, Ansible also returned a message that is shown below.
The error you're facing is already described in #188.
As for the upgrade: I think updating the variable and re-running the playbook is fine for now (see also https://github.com/k3s-io/k3s-ansible/issues/94#issuecomment-715667638). Please keep in mind that any upgrade may break your system. Also keep in mind that any bugfixes made in the scripts should be mirrored on your system, too. For a safe upgrade path I suggest:
- Drain your node(s)
- Upgrade your nodes using k3s-ansible, restart them afterwards
- Drain your master(s) and upgrade them too, restart them afterwards
- Make sure everything is up and running
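The cycle above can be sketched roughly as follows for a single worker. This is a hedged outline, not a tested procedure: the node name, inventory path, and use of `--limit` are assumptions based on the stock k3s-ansible layout.

```shell
# Drain the node so workloads move elsewhere (flags per upstream kubectl docs)
kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data

# Re-run the playbook for just this host with the bumped k3s_version
ansible-playbook site.yml -i inventory/my-cluster/hosts.ini --limit worker-1

# Restart the node, then let it schedule pods again once it reports Ready
ssh worker-1 sudo reboot
kubectl uncordon worker-1
```

Repeat per node, then do the same for the master(s).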
How about adding new nodes? Does just editing the hosts.ini file and adding new masters or nodes work?
And then running site.yml again?
Yes, just add them to your hosts.ini and rerun the playbook. I just did that a few weeks ago.
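To illustrate, the inventory change amounts to a new line in the relevant group. The layout below mirrors the stock k3s-ansible hosts.ini; the IPs are placeholders:

```ini
; inventory/my-cluster/hosts.ini
[master]
10.10.0.10

[node]
10.10.0.11
10.10.0.12   ; newly added worker -- then re-run the playbook

[k3s_cluster:children]
master
node
```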
I have now tried to upgrade my cluster via Ansible, and at first glance it seems to work.
I upgraded from a month-old version, k3s v1.23.4+k3s1, to the current release.
But at the end of the MetalLB tasks, an error occurs:
```
Thursday 08 September 2022 13:34:25 +0200 (0:00:01.314) 0:01:48.716 ****
ok: [10.10.0.10] => (item=controller)
ok: [10.10.0.10] => (item=webhook service)
ok: [10.10.0.10] => (item=pods in replica sets)
failed: [10.10.0.10] (item=ready replicas of controller) => {"ansible_loop_var": "item", "changed": false, "cmd": ["k3s", "kubectl", "wait", "replicaset", "--namespace=metallb-system", "--selector=component=controller,app=metallb", "--for=jsonpath={.status.readyReplicas}=1", "--timeout=60s"], "delta": "0:00:00.187160", "end": "2022-09-08 13:34:32.641412", "item": {"condition": "--for=jsonpath='{.status.readyReplicas}'=1", "description": "ready replicas of controller", "resource": "replicaset", "selector": "component=controller,app=metallb"}, "msg": "non-zero return code", "rc": 1, "start": "2022-09-08 13:34:32.454252", "stderr": "error: readyReplicas is not found", "stderr_lines": ["error: readyReplicas is not found"], "stdout": "replicaset.apps/controller-57fd9c5bb condition met", "stdout_lines": ["replicaset.apps/controller-57fd9c5bb condition met"]}
failed: [10.10.0.10] (item=fully labeled replicas of controller) => {"ansible_loop_var": "item", "changed": false, "cmd": ["k3s", "kubectl", "wait", "replicaset", "--namespace=metallb-system", "--selector=component=controller,app=metallb", "--for=jsonpath={.status.fullyLabeledReplicas}=1", "--timeout=60s"], "delta": "0:00:00.215121", "end": "2022-09-08 13:34:33.171628", "item": {"condition": "--for=jsonpath='{.status.fullyLabeledReplicas}'=1", "description": "fully labeled replicas of controller", "resource": "replicaset", "selector": "component=controller,app=metallb"}, "msg": "non-zero return code", "rc": 1, "start": "2022-09-08 13:34:32.956507", "stderr": "error: fullyLabeledReplicas is not found", "stderr_lines": ["error: fullyLabeledReplicas is not found"], "stdout": "replicaset.apps/controller-57fd9c5bb condition met", "stdout_lines": ["replicaset.apps/controller-57fd9c5bb condition met"]}
failed: [10.10.0.10] (item=available replicas of controller) => {"ansible_loop_var": "item", "changed": false, "cmd": ["k3s", "kubectl", "wait", "replicaset", "--namespace=metallb-system", "--selector=component=controller,app=metallb", "--for=jsonpath={.status.availableReplicas}=1", "--timeout=60s"], "delta": "0:00:00.178730", "end": "2022-09-08 13:34:33.662630", "item": {"condition": "--for=jsonpath='{.status.availableReplicas}'=1", "description": "available replicas of controller", "resource": "replicaset", "selector": "component=controller,app=metallb"}, "msg": "non-zero return code", "rc": 1, "start": "2022-09-08 13:34:33.483900", "stderr": "error: availableReplicas is not found", "stderr_lines": ["error: availableReplicas is not found"], "stdout": "replicaset.apps/controller-57fd9c5bb condition met", "stdout_lines": ["replicaset.apps/controller-57fd9c5bb condition met"]}
```
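For what it's worth, the failing tasks are `k3s kubectl wait --for=jsonpath=...` calls, and `kubectl wait` reports "is not found" when the requested status field is not yet populated on the ReplicaSet. A hedged way to inspect the state by hand, with the namespace and selector copied from the log above:

```shell
# Dump the controller ReplicaSet's status object; if readyReplicas /
# availableReplicas are absent here, the wait condition cannot match yet.
k3s kubectl get replicaset -n metallb-system \
  -l component=controller,app=metallb \
  -o jsonpath='{.items[0].status}'
```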
I did the same thing (changed the K3s version in group_vars/all.yml), but when I re-run the playbook I get this error on all my nodes:

```
"Destination /usr/local/bin/k3s is not writable", "owner": "root", "secontext": "system_u:object_r:bin_t:s0", "size": 53239808,
```
I did this and jumped straight from 1.22 to the latest 1.25, which resulted in a lot of weirdness that I'm trying to triage over in https://github.com/k3s-io/k3s/issues/6314. Don't do that.
I'd also like to know how to upgrade a k3s installed with this Ansible code.
There should be a section in the README describing how to do this, IMO.
This is now supported via the playbook.