aap_utilities

                        Allow the option to simply ensure/check if namespace is created
Summary
The code in the initialization.yml task file attempts to create the namespace, but not all OpenShift configurations allow the team/user to do that. In my current situation, the user cannot create namespaces; that permission is granted only to OCP admins, not to all users.
As a result, the following task fails, but it should only fail if the namespace does not exist and succeed if it already exists. Note that I believe it fails because the template YAML only contains the name parameter for the Namespace, whereas the existing namespace that the OCP admins created already has a number of properties set on it. The "desired state" mechanism of Ansible is probably trying to replace that object with the simpler one defined by the template. This is another reason I want the ability to just check whether a namespace with the given name exists, without attempting to create one in OCP: the role only needs to ensure the prerequisite (the namespace) exists.
https://github.com/redhat-cop/aap_utilities/blob/ecf367a30a7f3884a32c2241aae980a7a893511b/roles/aap_ocp_install/tasks/initialization.yml#L11-L21
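A check-only variant could be sketched with `kubernetes.core.k8s_info` instead of `kubernetes.core.k8s` (a sketch, not the role's implementation; the connection parameters and `aap_ocp_install_namespace` are taken from the role's existing task, while the registered variable name is hypothetical):

```yaml
- name: Check whether the namespace already exists (no create attempt)
  kubernetes.core.k8s_info:
    host: "{{ __aap_ocp_install_auth_results['openshift_auth']['host'] }}"
    api_key: "{{ __aap_ocp_install_auth_results['openshift_auth']['api_key'] }}"
    validate_certs: "{{ aap_ocp_install_ocp_connection['validate_certs'] | default(omit) }}"
    api_version: v1
    kind: Namespace
    name: "{{ aap_ocp_install_namespace }}"
  register: __namespace_lookup

- name: Fail early if the prerequisite namespace is missing
  ansible.builtin.assert:
    that: __namespace_lookup.resources | length > 0
    fail_msg: "Namespace '{{ aap_ocp_install_namespace }}' does not exist; ask an OCP admin to create it."
```

Because `k8s_info` is read-only, it cannot conflict with properties an admin has already set on the namespace.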
Issue Type
- Bug Report
Ansible, Collection, Docker/Podman details
ansible --version
ansible-galaxy collection list
podman --version
- ansible installation method: one of source, pip, OS package, EE
OS / ENVIRONMENT
Desired Behavior
Actual Behavior
Please give some details of what is actually happening. Include a minimum complete verifiable example with:
- playbook / task
- configuration file / list
- error
STEPS TO REPRODUCE
It appears the problem was actually that the properties were all different in the target OCP. This is what I ended up doing outside of the role - which uses the newer format of parameters:
```yaml
- name: Ensure namespace exists
  kubernetes.core.k8s:
    host: "{{ __aap_ocp_install_auth_results['openshift_auth']['host'] }}"
    api_key: "{{ __aap_ocp_install_auth_results['openshift_auth']['api_key'] }}"
    validate_certs: "{{ aap_ocp_install_ocp_connection['validate_certs'] | default(omit) }}"
    state: present
    name: "{{ aap_ocp_install_namespace }}"
    api_version: project.openshift.io/v1
    kind: Project
```
It appears the objects are generally created from templates that don't match the objects and parameters our documentation suggests. See our docs on installing using the CLI, which describe the metadata:
https://access.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.3/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/installing-aap-operator-cli
I'm hitting similar problems with creating the Operator now.
So it appears the collection needs to support a customer-specific implementation. In this case, the customer uses Project instead of Namespace in order to apply policy, etc.
I know this is Brian's brainchild and he has assigned it to himself, so I trust it will be taken care of; I'll be in contact with him about design changes.
This isn't a newer format of parameters, but a difference between OCP and plain Kubernetes. Kubernetes implements the concept of namespaces, and OpenShift adds Projects on top of that: a project is just a Kubernetes namespace with additional annotations. When the Project API is used, it creates a Namespace object.
So in your case, although the customer created the namespace via a project, there is still a Namespace object.
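The relationship can be illustrated with two manifests (a sketch; the project name `aap` and the annotation value are hypothetical):

```yaml
# Creating a project via the OpenShift Projects API...
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: aap
---
# ...results in a backing Kubernetes Namespace of the same name,
# decorated with OpenShift-specific annotations, e.g.:
apiVersion: v1
kind: Namespace
metadata:
  name: aap
  annotations:
    openshift.io/requester: some-user
```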
Do you know what permissions the user has? Are users prohibited from creating projects and only admins create the projects?
I think it was mainly a problem of trying to create the "Namespace" kind instead of the "Project" kind. If you want the repo to support both OCP and Kubernetes, you'd have to allow creating either a Project or a Namespace, not just a Namespace.
We don't support deploying AAP on anything outside of OCP, and this repo is specifically for AAP, not upstream. So we will not support anything outside of OCP and VMs unless that stance changes on the AAP side.
OCP supports creating namespaces either via the Namespace API or the Projects API. I know it is possible for OCP to limit who is able to create projects (via the self-provisioner cluster role).
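For reference, the self-provisioner configuration can be inspected from the CLI (a sketch; this assumes cluster-admin access, and follows the documented OpenShift procedure for disabling project self-provisioning):

```shell
# Show which groups/users currently hold the self-provisioner role
oc describe clusterrolebinding.rbac self-provisioners

# How a cluster admin typically restricts project creation:
# remove self-provisioning from all authenticated users
oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth
```

If the second command has been run on the customer's cluster, ordinary users cannot create projects (or namespaces), which would explain the failure described in this issue.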
The AAP documentation that @ansiblejunky linked in https://github.com/redhat-cop/aap_utilities/issues/158#issuecomment-1522166906 is misleading: the document says to use `oc new-project ...` in the first step, and then in step 3 creates a YAML file that defines (among other objects) a Namespace object, which is redundant with step 1.
As @djdanielsson said, since we only support AAP on OCP (and not AWX or vanilla Kubernetes) with this automation I'll update the deployment to use the Projects API instead of the Namespace API.
@ansiblejunky it would be great if you could find out if your customer removed the self-provisioner cluster role from the system:authenticated:oauth group and assigns it to specific group(s) (e.g. admins) or if there is some other RBAC configuration in place.