
[ENH] add code to make all hosts known to each other to avoid issues at deployment time

Open ericzolf opened this issue 3 years ago • 5 comments

Assuming we create the hosts fully automatically, they aren't known to each other (their SSH host keys are not in any known_hosts yet), so setup.sh can't work properly.

ericzolf avatar Dec 06 '21 17:12 ericzolf

`sudo ANSIBLE_HOST_KEY_CHECKING=False ./setup.sh` is only a partial solution, because the setup playbooks also call rsync, which obviously ignores that environment variable but still relies on SSH. Anyway, I'd like a more generic solution which could be reused for other purposes.
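
For illustration, a more generic approach could be a small play that distributes every node's SSH host key into every other node's system-wide known_hosts. A minimal sketch, assuming ECDSA host keys and root access (the module names are real Ansible modules, but this is illustrative, not the collection's implementation):

```yaml
- name: Make every host known to every other host
  hosts: all
  become: true
  tasks:
    - name: Read this host's SSH host key (assumes an ECDSA host key exists)
      ansible.builtin.slurp:
        src: /etc/ssh/ssh_host_ecdsa_key.pub
      register: host_key

    - name: Add every host's key to the system-wide known_hosts
      ansible.builtin.known_hosts:
        path: /etc/ssh/ssh_known_hosts
        name: "{{ item }}"
        key: "{{ item }} {{ hostvars[item].host_key.content | b64decode | trim }}"
        state: present
      loop: "{{ groups['all'] }}"
```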

ericzolf avatar Dec 06 '21 18:12 ericzolf

is this still an issue?

djdanielsson avatar Sep 30 '22 00:09 djdanielsson

I still think so.

ericzolf avatar Feb 15 '23 09:02 ericzolf

I wonder what this should look like?

Perhaps a role which can optionally be included which sets up /etc/hosts entries for each of the nodes?
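
Something like the following minimal sketch, assuming every node has `ansible_host` set in the inventory (or gathered facts to fall back on); this is only an illustration of what such an optional role task could contain:

```yaml
# Add (or update) an /etc/hosts entry for every node in the inventory.
# Falls back to the default IPv4 address, which requires gathered facts.
- name: Ensure every node has an /etc/hosts entry on this host
  ansible.builtin.lineinfile:
    path: /etc/hosts
    regexp: "\\s{{ item }}$"
    line: "{{ hostvars[item].ansible_host | default(hostvars[item].ansible_default_ipv4.address) }} {{ item }}"
    state: present
  loop: "{{ groups['all'] }}"
```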

Tompage1994 avatar Feb 15 '23 12:02 Tompage1994

I wonder if we should just add the ignore host check to ansible.cfg
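
For reference, the minimal change would look something like the snippet below (a sketch only; disabling the check also removes protection against man-in-the-middle attacks on first contact, which is why a known_hosts-based approach may still be preferable):

```ini
# ansible.cfg shipped alongside the installer / collection playbooks
[defaults]
host_key_checking = False
```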

djdanielsson avatar May 22 '24 12:05 djdanielsson

As a workaround, I used the below as part of my preflight tasks.

```yaml
# NOTE: ansible.builtin.user expects 'password' to be an already-hashed value.
- name: Create 'aap_install_user' for installer to use
  ansible.builtin.user:
    name: "{{ aap_install_user }}"
    comment: "{{ aap_install_user }} orchestrator user"
    home: "/home/{{ aap_install_user }}"
    groups: "wheel"
    password: "{{ aap_install_user_password }}"

# Line 2 of 'chage -l' output is "Password expires: <date or never>"
- name: Get the aap_install_user's password expiry
  ansible.builtin.shell: >-
    set -o pipefail &&
    chage -l {{ aap_install_user }} | sed -n "2p" | sed "s/.*: //g"
  when: not ansible_check_mode
  register: aap_install_user_expiry
  changed_when: false

- name: Set the aap_install_user password to never expire
  ansible.builtin.command: "chage -M -1 {{ aap_install_user }}"
  when: aap_install_user_expiry.stdout != "never"

- name: Allow passwordless sudo for {{ aap_install_user }}
  ansible.builtin.template:
    src: install_user_sudoers_file.j2
    dest: "/etc/sudoers.d/{{ aap_install_user }}"
    mode: "0600"
    owner: root
    group: root
    # Validate the rendered sudoers file before installing it
    validate: "visudo -cf %s"

# Runs on every host, so each host's key is available via hostvars afterwards
- name: Grab ssh host_key from all nodes
  ansible.builtin.slurp:
    src: /etc/ssh/ssh_host_ecdsa_key.pub
  register: ssh_host_key

- name: Prepare SSH access on the orchestrator_node
  when: orchestrator_node is defined
  block:
    - name: Verify orchestrator_node .ssh directory exists
      ansible.builtin.file:
        path: "/root/.ssh"
        state: directory
        owner: root
        group: root
        mode: "0700"

    - name: Generate a new ssh public private key pair on the orchestrator_node
      community.crypto.openssh_keypair:
        path: /root/.ssh/id_rsa
        type: rsa
        size: 4096
        state: present
        comment: "ansible automation platform installer node"

    - name: Grab ssh public key from the orchestrator node
      ansible.builtin.slurp:
        src: /root/.ssh/id_rsa.pub
      register: ssh_public_key

    - name: Add every host's SSH host key to the orchestrator node's known_hosts
      ansible.builtin.known_hosts:
        path: /root/.ssh/known_hosts
        name: "{{ item }}"
        key: "{{ item }},{{ hostvars[item].ansible_host }} {{ hostvars[item].ssh_host_key.content | b64decode | trim }}"
        state: present
      loop: "{{ groups.all }}"

# 'orchestrator_node_host_vars' is assumed to be defined elsewhere in the
# preflight and to point at the orchestrator host's inventory entry.
- name: Authorize the orchestrator node's SSH key for {{ aap_install_user }} on all hosts
  ansible.posix.authorized_key:
    user: "{{ aap_install_user }}"
    state: present
    key: "{{ hostvars[orchestrator_node_host_vars.inventory_hostname].ssh_public_key.content | b64decode }}"
```

anderpups avatar Jun 12 '24 13:06 anderpups

We have decided that this is a prerequisite for this collection to work: since you already need to provide SSH keys at this point, you should have handled host key checking in some way.

djdanielsson avatar Jun 24 '24 15:06 djdanielsson