awx-resource-operator
Jobs/Pods created by the Resource Operator from an AnsibleJob CR need to be able to specify extra volume mounts and environment variables
There are two different ways that the AAP2 Controller instance treats spawned Ansible Jobs: the properly configured way, via Job Template execution from the AAP2 Controller API/Web UI, and the incomplete way done as part of this template: https://github.com/ansible/awx-resource-operator/blob/acdf5bb1a1994fb534418bcc9fe2fd9b629a8c07/roles/job/templates/job_definition.yml.j2
Use Case - Adding the Cluster Root CA Bundle and Proxy Configuration to the Jobs/Pods started by an AnsibleJob
For example, in the AAP2 Controller Web UI, on the Settings > Job Settings page, you can enable "Expose host paths for Container Groups" and set "Paths to expose to isolated jobs" to ["/etc/pki/ca-trust:/etc/pki/ca-trust:O"] to pass the trusted root CA bundles from the hosts (as set by the Proxy configuration) into the ephemeral Job container that runs in the namespace the operator is installed into. Alternatively, you can set that path via a custom pod specification as a volumeMount in the default Instance Group - this works splendidly.
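The custom pod specification approach above can be sketched roughly as follows. This is a minimal illustration, not a verbatim copy of any shipped spec; the image reference and namespace are assumptions:

```yaml
# Hypothetical custom pod spec for a Container Group / default Instance Group,
# mounting the host's CA trust store into the job container.
apiVersion: v1
kind: Pod
metadata:
  namespace: ansible-automation-platform   # assumed operator namespace
spec:
  serviceAccountName: default
  containers:
    - name: worker
      image: quay.io/ansible/awx-ee:latest   # assumed execution environment image
      volumeMounts:
        - name: ca-trust
          mountPath: /etc/pki/ca-trust
          readOnly: true
  volumes:
    - name: ca-trust
      hostPath:
        path: /etc/pki/ca-trust
        type: Directory
```

Jobs launched through a properly configured Job Template pick this up; Jobs templated by the Resource Operator do not, which is the gap described below.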
Challenge
However, whenever an AnsibleJob CR is created on the cluster, the AAP2 Operator's Resource Operator controller does not consume these values. In fact, the Job definition template (and thus the Pods created by the Resource Operator controller) starts Ansible Jobs without any defined Execution Environment, and does not consume the same settings as set in the AAP2 Controller Job Settings.
Workaround
To work around this issue, you need to build a custom container image based on registry.redhat.io/ansible-automation-platform-22/platform-resource-runner-rhel8, add your root CA certificates to it, RUN an update-ca-trust, build and push the image to a registry, then reference it via .spec.runner_image and .spec.runner_version in the AnsibleJob CR.
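The workaround build could look something like this (certificate filename, registry, and tag are all hypothetical):

```dockerfile
# Containerfile - custom runner image with the cluster root CAs baked in
FROM registry.redhat.io/ansible-automation-platform-22/platform-resource-runner-rhel8:latest

USER 0
# my-root-ca.crt is a placeholder for your organization's CA certificate(s)
COPY my-root-ca.crt /etc/pki/ca-trust/source/anchors/
RUN update-ca-trust
USER 1000
```

After pushing the image, the AnsibleJob CR would reference it; a sketch, with assumed names for the CR, secret, and Job Template:

```yaml
apiVersion: tower.ansible.com/v1alpha1
kind: AnsibleJob
metadata:
  name: demo-job
spec:
  tower_auth_secret: controller-access        # assumed connection secret name
  job_template_name: Demo Job Template        # assumed Job Template name
  runner_image: registry.example.com/custom-resource-runner
  runner_version: latest
```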
It looks like the Job Runner role task uses the awx.awx.tower_job_launch module, which is fairly limited in what it passes to the instantiated Job, so either the module would need to be updated, or the Resource Operator needs to be able to compensate for additional container specifications inherited from the cluster settings.
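To illustrate the limitation: a launch task along these lines (a hypothetical excerpt, not the role's actual task; variable names are assumptions) only carries Job Template-level options, with no parameter for volumes, volumeMounts, or container env on the spawned Pod:

```yaml
# Hypothetical task modeled on the Job Runner role's launch step.
- name: Launch the job from the referenced Job Template
  awx.awx.tower_job_launch:
    name: "{{ ansiblejob_job_template_name }}"                 # assumed variable
    extra_vars: "{{ ansiblejob_extra_vars | default(omit) }}"  # assumed variable
    wait: false
```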
Potential Solutions
What if the Resource Operator consumed the Root CA Certificate ConfigMap, as many other Red Hat Operators do, and then applied it to the templated Job CRs? The process would be similar to the following:
- Query for .spec.trustedCA.name; if set, then add a ConfigMap with the config.openshift.io/inject-trusted-cabundle: 'true' metadata label
- Attach that ConfigMap to the Job CR templated by the Job Role in the Resource Operator
- Repeat for other configuration, such as Proxy > Container Envs
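The steps above could look roughly like this, following the standard OpenShift trusted CA bundle injection pattern. All resource names are hypothetical:

```yaml
# ConfigMap the Resource Operator would create; the cluster's network
# operator injects the trust bundle under the key ca-bundle.crt.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ansiblejob-trusted-ca
  labels:
    config.openshift.io/inject-trusted-cabundle: "true"
---
# ...which the Job role could then attach to the templated Job's pod spec:
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-ansiblejob-runner
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: runner
          image: registry.redhat.io/ansible-automation-platform-22/platform-resource-runner-rhel8
          volumeMounts:
            - name: trusted-ca
              mountPath: /etc/pki/ca-trust/extracted/pem
              readOnly: true
      volumes:
        - name: trusted-ca
          configMap:
            name: ansiblejob-trusted-ca
            items:
              - key: ca-bundle.crt
                path: tls-ca-bundle.pem
```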
Alternatively, the Resource Operator could query the AAP2 Controller Job Configuration Settings and apply those instead - or another configuration area could be presented for Runner Configuration Settings.
Thoughts?