No existing session: Bastion being used when not defined under Paramiko
When running Kubespray release-2.19 with the Paramiko connection plugin (needed so I can authenticate with a username/password), the very first task fails with the message "No existing session". Running ansible-playbook with -vvv shows that Paramiko is trying to proxy the SSH connection through a non-existent bastion host:
<172.25.16.11> CONFIGURE PROXY COMMAND FOR CONNECTION: ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -W 172.25.16.11:22 -p {{ hostvars[bastion][ansible_port] | default(22) }} {{ hostvars[bastion][ansible_user] }}@{{ hostvars[bastion][ansible_host] }} {% if ansible_ssh_private_key_file is defined %}-i {{ ansible_ssh_private_key_file }}{% endif %}
Here's the code that logs this message: https://github.com/ansible/ansible/blob/04c7abcbfe934d218f51894be204f718a17c7e72/lib/ansible/plugins/connection/paramiko_ssh.py#L303
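For context, any non-empty ansible_ssh_common_args containing a ProxyCommand option sends Paramiko down the proxy path. Here is a minimal sketch of that discovery step, assuming simple tokenize-and-scan behavior (my paraphrase, not the plugin's actual code):

import shlex

def find_proxy_command(ssh_common_args):
    """Rough paraphrase of how a ProxyCommand option hiding in
    ansible_ssh_common_args gets discovered: tokenize the option
    string and look for ProxyCommand=..."""
    for token in shlex.split(ssh_common_args):
        if token.lower().startswith("proxycommand="):
            return token.split("=", 1)[1]
    return None

args = "-o ProxyCommand='ssh -W %h:%p {{ hostvars[bastion][ansible_user] }}@{{ hostvars[bastion][ansible_host] }}'"
# Prints the unrendered Jinja proxy command -- the proxy path is taken
# whenever the option string is non-empty, whether or not a bastion exists.
print(find_proxy_command(args))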
The problem goes away if I comment out this line in Kubespray: https://github.com/kubernetes-sigs/kubespray/blob/release-2.9/roles/kubespray-defaults/defaults/main.yaml#L4
That means one workaround is to set ansible_ssh_common_args: "" in your host vars.
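Concretely, the workaround looks like this (the file path is just an example; any host_vars/group_vars location with sufficient precedence should work):

# inventory/host_vars/n1-mac.yml  (hypothetical path)
ansible_ssh_common_args: ""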
But it's quite strange that the variable is suddenly being set even when no bastion is defined. This problem didn't occur on release-2.18 (with an older Ansible version).
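For reference, judging from the unrendered ProxyCommand in the -vvv output below, the default being injected must look roughly like this (reconstructed from the log, not a verbatim quote of roles/kubespray-defaults/defaults/main.yaml):

# Reconstructed from the logged proxy command; the %h:%p pair is what
# gets substituted with 172.25.16.11:22 etc. in the output below.
ansible_ssh_common_args: "-o ProxyCommand='ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -W %h:%p -p {{ hostvars[bastion][ansible_port] | default(22) }} {{ hostvars[bastion][ansible_user] }}@{{ hostvars[bastion][ansible_host] }} {% if ansible_ssh_private_key_file is defined %}-i {{ ansible_ssh_private_key_file }}{% endif %}'"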
Environment:
- Cloud provider or hardware configuration: on-prem
- OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"): AlmaLinux 8.5
- Version of Ansible (ansible --version): 2.12.5
- Version of Python (python --version): 3.10.5
- Kubespray version (commit) (git rev-parse --short HEAD): cd93d106 (release-2.19 branch)
Full inventory with variables:
all:
  hosts:
    n1-mac:
      ansible_host: 172.25.16.11
      ip: 172.25.16.11
      access_ip: 172.25.16.11
    n2-mac:
      ansible_host: 172.25.16.12
      ip: 172.25.16.12
      access_ip: 172.25.16.12
    n3-mac:
      ansible_host: 172.25.16.13
      ip: 172.25.16.13
      access_ip: 172.25.16.13
  children:
    kube_control_plane:
      hosts:
        n1-mac:
        n2-mac:
        n3-mac:
    kube_node:
      hosts:
        n1-mac:
        n2-mac:
        n3-mac:
    etcd:
      hosts:
        n1-mac:
        n2-mac:
        n3-mac:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
Command used to invoke ansible:
ansible-playbook -i hosts.yml -vvv -c paramiko --ask-pass upgrade-cluster.yml
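(To test the workaround above without editing the inventory, the empty override should also work as an extra var, since extra vars take the highest precedence in Ansible; this variant of the command is untested:
ansible-playbook -i hosts.yml -vvv -c paramiko --ask-pass -e ansible_ssh_common_args="" upgrade-cluster.yml)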
Output of ansible run:
...
TASK [kubespray-defaults : Configure defaults] ***********************************************************************************************************************************
task path: /Users/machaffe/code/acis-kubernetes/kubespray/kubespray-work/roles/kubespray-defaults/tasks/main.yaml:2
ok: [n1-mac] => {
"msg": "Check roles/kubespray-defaults/defaults/main.yml"
}
ok: [n2-mac] => {
"msg": "Check roles/kubespray-defaults/defaults/main.yml"
}
ok: [n3-mac] => {
"msg": "Check roles/kubespray-defaults/defaults/main.yml"
}
[WARNING]: raw module does not support the environment keyword
<172.25.16.11> ESTABLISH PARAMIKO SSH CONNECTION FOR USER: None on PORT 22 TO 172.25.16.11
<172.25.16.11> CONFIGURE PROXY COMMAND FOR CONNECTION: ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -W 172.25.16.11:22 -p {{ hostvars[bastion][ansible_port] | default(22) }} {{ hostvars[bastion][ansible_user] }}@{{ hostvars[bastion][ansible_host] }} {% if ansible_ssh_private_key_file is defined %}-i {{ ansible_ssh_private_key_file }}{% endif %}
[WARNING]: raw module does not support the environment keyword
<172.25.16.12> ESTABLISH PARAMIKO SSH CONNECTION FOR USER: None on PORT 22 TO 172.25.16.12
<172.25.16.12> CONFIGURE PROXY COMMAND FOR CONNECTION: ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -W 172.25.16.12:22 -p {{ hostvars[bastion][ansible_port] | default(22) }} {{ hostvars[bastion][ansible_user] }}@{{ hostvars[bastion][ansible_host] }} {% if ansible_ssh_private_key_file is defined %}-i {{ ansible_ssh_private_key_file }}{% endif %}
[WARNING]: raw module does not support the environment keyword
<172.25.16.13> ESTABLISH PARAMIKO SSH CONNECTION FOR USER: None on PORT 22 TO 172.25.16.13
<172.25.16.13> CONFIGURE PROXY COMMAND FOR CONNECTION: ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -W 172.25.16.13:22 -p {{ hostvars[bastion][ansible_port] | default(22) }} {{ hostvars[bastion][ansible_user] }}@{{ hostvars[bastion][ansible_host] }} {% if ansible_ssh_private_key_file is defined %}-i {{ ansible_ssh_private_key_file }}{% endif %}
TASK [bootstrap-os : Fetch /etc/os-release] **************************************************************************************************************************************
task path: /Users/machaffe/code/acis-kubernetes/kubespray/kubespray-work/roles/bootstrap-os/tasks/main.yml:2
fatal: [n1-mac]: UNREACHABLE! => {
"changed": false,
"msg": "No existing session",
"unreachable": true
}
fatal: [n2-mac]: UNREACHABLE! => {
"changed": false,
"msg": "No existing session",
"unreachable": true
}
fatal: [n3-mac]: UNREACHABLE! => {
"changed": false,
"msg": "No existing session",
"unreachable": true
}
NO MORE HOSTS LEFT ***************************************************************************************************************************************************************
PLAY RECAP ***********************************************************************************************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
n1-mac : ok=1 changed=0 unreachable=1 failed=0 skipped=21 rescued=0 ignored=0
n2-mac : ok=1 changed=0 unreachable=1 failed=0 skipped=15 rescued=0 ignored=0
n3-mac : ok=1 changed=0 unreachable=1 failed=0 skipped=15 rescued=0 ignored=0
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.