When provisioning a Docker container, HostAlias/host_alias remains 'default' instead of being set to the container ID
Overview of the Issue
I have a packer file containing the following:
[....]
source "docker" "Test" {
  image       = "centos:7"
  export_path = "test.tar"
}
[....]
build {
  sources = ["source.docker.Test"]

  provisioner "shell" {
    inline = [
      "echo 'proxy=http://<proxy_url>' >> /etc/yum.conf",
      "rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7",
      "yum install -y python3"
    ]
  }

  provisioner "ansible" {
    extra_arguments = [
      "-e", "proxy_url='http://<proxy_url>'",
      "-e", "ansible_connection=docker"
    ]
    playbook_file = "playbooks/GoldenImage.yml"
    user          = "root"
  }
}
Ansible fails the first time it tries to connect to the Docker container (during the initial host scan, called "fact gathering" in Ansible), and I suspect this is because the contents of the inventory file look like:
default ansible_host=127.0.0.1 ansible_user=root ansible_port=42779
I think, although I am not sure, that 'default' should actually be the ID of the Docker container, since with ansible_connection=docker the inventory entry is what Ansible uses to locate the container. I have checked hashicorp/packer-plugin-ansible/provisioner/ansible/provisioner.go, lines 265-267, and 'default' is the value HostAlias falls back to if nothing else is set. I guess setting the host_alias parameter to the container ID in my packer section would be enough, but I do not see how to get the container ID from the docker builder in packer. Is this a bug, or a configuration mistake on my side?
UPDATE: If I set the host_alias parameter in the packer config file, the inventory file gets updated as expected. However, I do not understand how to access the state variable instance_id (as it is called in packer-plugin-docker, where it is exposed for provisioners) from the config file.
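For illustration, this is roughly what I would expect to be able to write, assuming (I have not verified this) that the docker builder publishes the container ID through the build.ID contextual variable:

provisioner "ansible" {
  extra_arguments = [
    "-e", "ansible_connection=docker"
  ]
  # Assumption: build.ID would carry the Docker container ID here; unverified.
  host_alias    = build.ID
  playbook_file = "playbooks/GoldenImage.yml"
  user          = "root"
}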
Plugin and Packer version
Packer version: 1.7.4. I do not know how to get the plugin versions.
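If it helps, I could pin the plugins with a required_plugins block and run packer init, which would at least make the versions explicit (the version constraints below are placeholders, not the versions actually in use):

packer {
  required_plugins {
    docker = {
      version = ">= 1.0.0"   # placeholder constraint
      source  = "github.com/hashicorp/docker"
    }
    ansible = {
      version = ">= 1.0.0"   # placeholder constraint
      source  = "github.com/hashicorp/ansible"
    }
  }
}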
As a workaround for this issue, we re-run the setupdb.sh script in our multicloud bootstrap automation code; this creates the required tablespaces, and the MAS + Manage stack with internal DB2 then installs successfully.
oc exec -n db2u c-db2wh-db01-db2u-0 -- su -lc '/tmp/setupdb.sh | tee /tmp/setupdb2.log' db2inst1
I have the following environment parameters set for verifying this MAS Core + Manage (internal DB2) scenario:
MAS_CHANNEL=8.11.x
MAS_DEVOPS_COLLECTION_VERSION=18.3.4
MAS_CATALOG_VERSION=v8-231004-amd64
MAS_APP_CHANNEL=8.7.x
I am seeing this issue on a ROKS cluster (VPC Gen2 infrastructure) while trying to install Manage using the MAS Client approach for the UnifyBlue project. The issue is recurring, and I am running the setupdb.sh script manually to work around it. Please find the log and screenshot file above. @durera - FYI