
bug: Unable to provision multiple nodes using Vagrant


Summary

I have a Vagrantfile that provisions three boxes running AlmaLinux 8 via libvirt, each of which uses the Ansible provisioner to apply this role.

I have no agent nodes, so I'm not tainting the server nodes. Provisioning a single-node cluster works without issue, but specifying multiple server nodes in my Ansible inventory does not.

In High Availability mode, I run into the following error on the Create keepalived config file task:

An exception occurred during task execution. To see the full traceback, use -vvv.
The error was: ansible.errors.AnsibleUndefinedVariable: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'ansible_default_ipv4'.
'ansible.vars.hostvars.HostVarsVars object' has no attribute 'ansible_default_ipv4'

I was able to get past this issue by changing the line below to {{ hostvars[host].ansible_host }}: https://github.com/lablabs/ansible-role-rke2/blob/dc6d4267dd346bb133baf662532bb797e0408270/templates/keepalived.conf.j2#L48
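
For reference, a minimal sketch of that change (the exact original line is not reproduced here; it presumably dereferences the gathered ansible_default_ipv4 fact, which is what the error complains about):

{# templates/keepalived.conf.j2, around line 48 #}
{# original: relies on a gathered fact that may be missing for peer hosts #}
{{ hostvars[host]['ansible_default_ipv4']['address'] }}
{# workaround applied locally: use the statically defined inventory address #}
{{ hostvars[host].ansible_host }}
{# alternative that keeps the fact but falls back to ansible_host when it is absent #}
{{ hostvars[host].ansible_default_ipv4.address | default(hostvars[host].ansible_host) }}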

Issue Type

Bug Report

Ansible Version

ansible [core 2.14.1]
  config file = None
  configured module search path = ['/home/austin/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/austin/.local/lib/python3.11/site-packages/ansible
  ansible collection location = /home/austin/.ansible/collections:/usr/share/ansible/collections
  executable location = /home/austin/.local/bin/ansible
  python version = 3.11.1 (main, Dec 11 2022, 15:18:51) [GCC 10.2.1 20201203] (/usr/bin/python3)
  jinja version = 3.1.2
  libyaml = True

Steps to Reproduce

Have libvirt set up and the vagrant-libvirt plugin installed, along with Vagrant, Ansible, and this role.

Below are the three files needed to reproduce the issue with vagrant up:

Vagrantfile

NODES = [
    { hostname: "controller1", ip: "192.168.111.2", ram: 4096, cpu: 2 },
    { hostname: "controller2", ip: "192.168.111.3", ram: 4096, cpu: 2 },
    { hostname: "controller3", ip: "192.168.111.4", ram: 4096, cpu: 2 }
]

Vagrant.configure(2) do |config|
  NODES.each do |node|
    config.vm.define node[:hostname] do |config|
      config.vm.hostname = node[:hostname]
      config.vm.box = "almalinux/8"
      config.vm.network :private_network, ip: node[:ip]

      config.vm.provider :libvirt do |domain|
        domain.memory = node[:ram]
        domain.cpus = node[:cpu]
      end

      config.vm.provision :ansible do |ansible|
        ansible.playbook = "playbooks/provision.yml"
        ansible.inventory_path = "inventory/hosts.ini"
      end
    end
  end
end
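
A possible contributing factor, noted as an assumption: by default Vagrant runs the Ansible provisioner once per machine and limits the run to that machine only, so facts such as ansible_default_ipv4 are never gathered for the other controllers within the same play. A rough sketch of a provisioner block that runs the play against all inventory hosts instead (not verified against this role):

config.vm.provision :ansible do |ansible|
  ansible.playbook = "playbooks/provision.yml"
  ansible.inventory_path = "inventory/hosts.ini"
  ansible.limit = "all"  # do not restrict the run to the machine currently being provisioned
end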

playbooks/provision.yml

- hosts: all
  become: true
  vars:
    rke2_channel: stable
    rke2_servers_group_name: rke2_servers
    rke2_agents_group_name: rke2_agents
    rke2_ha_mode: true
  roles:
  - lablabs.rke2

inventory/hosts.ini

[rke2_servers]
controller1 ansible_host=192.168.111.2 rke2_type=server
controller2 ansible_host=192.168.111.3 rke2_type=server
controller3 ansible_host=192.168.111.4 rke2_type=server

[rke2_agents]

[k8s_cluster:children]
rke2_servers
rke2_agents
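
As a quick check of which facts each controller exposes to its peers, a small diagnostic play along these lines can be run against the same inventory (a sketch, not part of the reproduction itself):

- hosts: rke2_servers
  gather_facts: true
  tasks:
    - name: Show which controllers expose ansible_default_ipv4
      ansible.builtin.debug:
        msg: "{{ item }}: {{ hostvars[item].ansible_default_ipv4.address | default('fact not gathered') }}"
      loop: "{{ groups['rke2_servers'] }}"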

Expected Results

Three server nodes are provisioned after running vagrant up.

Actual Results

All server nodes fail to provision; the rke2 Ansible role errors out on the keepalived task described above.

aubaugh · Feb 08 '23 22:02