terraform-provisioner-ansible
Specifying multiple host groups in one run of Ansible with unique hosts in each group
I want an inventory file that looks like this:
```ini
[controller]
10.1.1.10

[infra]
10.1.1.11
```
Because I have a playbook task that looks like this:
```yaml
---
# loop through each backend supporting the cinder api proxy
# to ensure they are up.
- name: checking cinder api haproxy backend
  haproxy:
    state: enabled
    host: "controller{{ item[0] }}"
    backend: cinder_api
    wait: yes
    wait_interval: 1
  delegate_to: "{{ item[1] }}"
  with_nested:
    - "{{ range(groups['controller'] | length) | list }}"
    - "{{ groups['infra'] }}"
```
Notice the task references both the `controller` and `infra` host groups.
So I tried something like this:
resource "openstack_compute_instance_v2" "infra" {
name = "infra"
image_name = "ubuntu-18.04"
network {
name = "public"
}
}
resource "openstack_compute_instance_v2" "controller" {
name = "controller-${random_string.unique.result}"
image_name = "ubuntu-18.04"
network {
name = "public"
}
}
resource "null_resource" "openstack-playbook" {
provisioner "ansible" {
connection {
user = "ubuntu"
private_key = "${var.ssh_private_key}"
}
plays {
groups = ["infra"]
hosts = ["${openstack_compute_instance_v2.infra.access_ip_v4}"]
playbook {
file_path = "../playbook.yml"
}
}
plays {
groups = ["controller"]
hosts = ["${openstack_compute_instance_v2.controller.access_ip_v4}"]
playbook {
file_path = "../playbook.yml"
}
}
}
}
But this fails with: `Error: Local mode requires a connection with username and host`
It appears each provisioner run instance will only connect to a single Ansible host, regardless of how many `plays.hosts` entries are specified.
This issue is now almost 2 years old. Does there happen to be any progress in this matter?
My use case is that I need a single Ansible inventory containing all hosts, because my playbook tasks iterate over all hosts in a specific group to set up a database cluster...
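In the meantime I am considering a workaround that bypasses the ansible provisioner entirely: render one inventory file covering both groups with a `local_file` resource, then run `ansible-playbook` once against it from a `local-exec` provisioner. This is only a rough sketch of that idea, not something this provisioner supports; the resource names and file paths below are illustrative, and it assumes `var.ssh_private_key` holds a path to the key file and that `ansible-playbook` is installed on the machine running Terraform.

```hcl
# Sketch of a possible workaround: build a single inventory with both groups,
# then invoke ansible-playbook directly instead of using the ansible provisioner.

resource "local_file" "inventory" {
  filename = "${path.module}/inventory"

  content = <<EOT
[controller]
${openstack_compute_instance_v2.controller.access_ip_v4}

[infra]
${openstack_compute_instance_v2.infra.access_ip_v4}
EOT
}

resource "null_resource" "openstack-playbook" {
  # Re-run the playbook whenever the rendered inventory changes.
  triggers = {
    inventory = "${local_file.inventory.content}"
  }

  provisioner "local-exec" {
    # Assumes var.ssh_private_key is a path to the private key file and
    # ansible-playbook is available locally.
    command = "ansible-playbook -i ${local_file.inventory.filename} -u ubuntu --private-key ${var.ssh_private_key} ../playbook.yml"
  }
}
```

With everything in one inventory, `groups['controller']` and `groups['infra']` both resolve inside the playbook, but obviously this gives up the features of the ansible provisioner itself.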