kubespray
Kubespray Inventory - hosts.yml not recognized
Hello,
I am using the latest version of kubespray (git cloned yesterday, September 12th, 2022) and I have generated the hosts.yml with the inventory builder (see attached file; the IPs are public and directly reachable from my location, I have simply anonymized them slightly). Unfortunately the hosts.yml is not accepted.
When I start the playbook with:
ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml
I get the following warnings:
[WARNING]: Unable to parse /home/stefan/kubespray/inventory/mycluster/hosts.yml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Questions:
1) Why does it not recognize the hosts.yml built with the inventory builder?
2) How can I fix this?
The hosts.yml is attached to this posting.
I am currently working with ansible-core 2.12.5, since the script complained about Ansible versions higher than 2.12 and also refused to recognize my installation of netaddr. I tried to start from a clean slate: I uninstalled all packages installed as root or via apt-get, then let pip3 install the requirements into my home directory using requirements-2.12 with the -r option.
Yours sincerely,
Stefan

Attachment: hosts.txt
Are you sure your file is accessible? ansible-inventory works fine with your file, and I can start the cluster playbook without any issue using your hosts.yml.
cat /home/stefan/kubespray/inventory/mycluster/hosts.yaml outputs the file. It belongs to my user (stefan) and has permissions 644. Which versions of ansible-core and ansible are you using? Did you install the requirements via pip3 install -r requirements.txt?
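Before comparing tool versions, it can help to rule out the file itself. The sketch below (hypothetical paths; it uses PyYAML, which is already installed as a dependency of ansible-core) checks that the inventory exists at the exact path passed to ansible and parses as YAML with a top-level `all` group:

```python
# Minimal sketch (hypothetical path): verify an inventory file before blaming
# ansible. Note that "hosts.yml" and "hosts.yaml" are different files.
import os

import yaml  # PyYAML, a dependency of ansible-core


def check_inventory(path):
    """Return the parsed inventory mapping, or raise with a pointed message."""
    if not os.path.isfile(path):
        # Easy trap: the builder wrote hosts.yml but the command says hosts.yaml
        raise FileNotFoundError(
            f"{path} does not exist -- check the extension (.yml vs .yaml)"
        )
    with open(path) as f:
        data = yaml.safe_load(f)
    if not isinstance(data, dict) or "all" not in data:
        raise ValueError(f"{path} is valid YAML but has no top-level 'all' group")
    return data
```

If this raises, the problem is with the file or the path, not with ansible's inventory plugins.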
I believe I have the same issue. git pull from just today:
zhengyi at vmbox in kubespray on master via kubespray-venv took 11m 18.8s
➜ git log -1
commit ecd649846a3762555a69bf950dcf0177bd09e15e (HEAD -> master, origin/master, origin/HEAD)
Author: Mohamed Zaian <[email protected]>
Date: Wed Mar 1 00:35:18 2023 +0100
[containerd] add hashes for 1.6.19 (#9838)
I'm in a venv:
zhengyi at vmbox in kubespray on master via kubespray-venv
➜ python -i
Python 3.11.2 (main, Feb 8 2023, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> def get_base_prefix_compat():
... """Get base/real prefix, or sys.prefix if there is none."""
... return getattr(sys, "base_prefix", None) or getattr(sys, "real_prefix", None) or sys.prefix
...
>>> get_base_prefix_compat()
'/usr'
>>> def in_virtualenv():
... return get_base_prefix_compat() != sys.prefix
...
>>> in_virtualenv()
True
>>> quit()
... wherein I have everything installed properly (I believe):
zhengyi at vmbox in ~/cluster/kubespray-venv via kubespray-venv
➜ pip install -r kubespray/requirements.txt
Requirement already satisfied: ansible==5.7.1 in ./venv/lib64/python3.11/site-packages (from -r kubespray/requirements.txt (line 1)) (5.7.1)
Requirement already satisfied: ansible-core==2.12.5 in ./venv/lib64/python3.11/site-packages (from -r kubespray/requirements.txt (line 2)) (2.12.5)
Requirement already satisfied: cryptography==3.4.8 in ./venv/lib64/python3.11/site-packages (from -r kubespray/requirements.txt (line 3)) (3.4.8)
Requirement already satisfied: jinja2==2.11.3 in ./venv/lib64/python3.11/site-packages (from -r kubespray/requirements.txt (line 4)) (2.11.3)
Requirement already satisfied: jmespath==0.9.5 in ./venv/lib64/python3.11/site-packages (from -r kubespray/requirements.txt (line 5)) (0.9.5)
Requirement already satisfied: MarkupSafe==1.1.1 in ./venv/lib64/python3.11/site-packages (from -r kubespray/requirements.txt (line 6)) (1.1.1)
Requirement already satisfied: netaddr==0.7.19 in ./venv/lib64/python3.11/site-packages (from -r kubespray/requirements.txt (line 7)) (0.7.19)
Requirement already satisfied: pbr==5.4.4 in ./venv/lib64/python3.11/site-packages (from -r kubespray/requirements.txt (line 8)) (5.4.4)
Requirement already satisfied: ruamel.yaml==0.16.10 in ./venv/lib64/python3.11/site-packages (from -r kubespray/requirements.txt (line 9)) (0.16.10)
Requirement already satisfied: ruamel.yaml.clib==0.2.7 in ./venv/lib64/python3.11/site-packages (from -r kubespray/requirements.txt (line 10)) (0.2.7)
Requirement already satisfied: PyYAML in ./venv/lib64/python3.11/site-packages (from ansible-core==2.12.5->-r kubespray/requirements.txt (line 2)) (6.0)
Requirement already satisfied: packaging in ./venv/lib64/python3.11/site-packages (from ansible-core==2.12.5->-r kubespray/requirements.txt (line 2)) (23.0)
Requirement already satisfied: resolvelib<0.6.0,>=0.5.3 in ./venv/lib64/python3.11/site-packages (from ansible-core==2.12.5->-r kubespray/requirements.txt (line 2)) (0.5.4)
Requirement already satisfied: cffi>=1.12 in ./venv/lib64/python3.11/site-packages (from cryptography==3.4.8->-r kubespray/requirements.txt (line 3)) (1.15.1)
Requirement already satisfied: pycparser in ./venv/lib64/python3.11/site-packages (from cffi>=1.12->cryptography==3.4.8->-r kubespray/requirements.txt (line 3)) (2.21)
[notice] A new release of pip is available: 23.0 -> 23.0.1
[notice] To update, run: pip install --upgrade pip
Here's how I build my inventory:
zhengyi at vmbox in kubespray on master via kubespray-venv
➜ declare -a IPS=(192.168.1.116 192.168.1.118 192.168.1.119 192.168.1.115 192.168.1.117 192.168.1.114)
zhengyi at vmbox in kubespray on master via kubespray-venv
➜ which python3
/home/zhengyi/cluster/kubespray-venv/venv/bin/python3
zhengyi at vmbox in kubespray on master via kubespray-venv
➜ CONFIG_FILE=inventory/mycluster/hosts2.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
DEBUG: Adding group all
DEBUG: Adding group kube_control_plane
DEBUG: Adding group kube_node
DEBUG: Adding group etcd
DEBUG: Adding group k8s_cluster
DEBUG: Adding group calico_rr
DEBUG: adding host node1 to group all
DEBUG: adding host node2 to group all
DEBUG: adding host node3 to group all
DEBUG: adding host node4 to group all
DEBUG: adding host node5 to group all
DEBUG: adding host node6 to group all
DEBUG: adding host node1 to group etcd
DEBUG: adding host node2 to group etcd
DEBUG: adding host node3 to group etcd
DEBUG: adding host node1 to group kube_control_plane
DEBUG: adding host node2 to group kube_control_plane
DEBUG: adding host node1 to group kube_node
DEBUG: adding host node2 to group kube_node
DEBUG: adding host node3 to group kube_node
DEBUG: adding host node4 to group kube_node
DEBUG: adding host node5 to group kube_node
DEBUG: adding host node6 to group kube_node
... and here's ansible-inventory loudly refusing to parse it:
zhengyi at vmbox in kubespray on master via kubespray-venv
➜ ansible-inventory -i inventory/mycluster/hosts2.yaml -vvv --list
ansible-inventory [core 2.12.5]
config file = /home/zhengyi/cluster/kubespray-venv/kubespray/ansible.cfg
configured module search path = ['/home/zhengyi/cluster/kubespray-venv/kubespray/library']
ansible python module location = /home/zhengyi/cluster/kubespray-venv/venv/lib64/python3.11/site-packages/ansible
ansible collection location = /home/zhengyi/.ansible/collections:/usr/share/ansible/collections
executable location = /home/zhengyi/cluster/kubespray-venv/venv/bin/ansible-inventory
python version = 3.11.2 (main, Feb 8 2023, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)]
jinja version = 2.11.3
libyaml = True
Using /home/zhengyi/cluster/kubespray-venv/kubespray/ansible.cfg as config file
host_list declined parsing /home/zhengyi/cluster/kubespray-venv/kubespray/inventory/mycluster/hosts2.yaml as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /home/zhengyi/cluster/kubespray-venv/kubespray/inventory/mycluster/hosts2.yaml as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
auto declined parsing /home/zhengyi/cluster/kubespray-venv/kubespray/inventory/mycluster/hosts2.yaml as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /home/zhengyi/cluster/kubespray-venv/kubespray/inventory/mycluster/hosts2.yaml as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /home/zhengyi/cluster/kubespray-venv/kubespray/inventory/mycluster/hosts2.yaml as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /home/zhengyi/cluster/kubespray-venv/kubespray/inventory/mycluster/hosts2.yaml as it did not pass its verify_file() method
[WARNING]: Unable to parse /home/zhengyi/cluster/kubespray-venv/kubespray/inventory/mycluster/hosts2.yaml as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
{
"_meta": {
"hostvars": {}
},
"all": {
"children": [
"ungrouped"
]
}
}
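One reading of the trace above (a guess, not confirmed in the thread): every plugin "declined" and each skip line says the source does not exist. Notably, the builder was given CONFIG_FILE=inventory/mycluster/hosts2.yml, while ansible-inventory was pointed at hosts2.yaml. A small stdlib sketch (hypothetical helper, assumed paths) to catch that kind of extension mismatch before running ansible:

```python
# Sketch with assumed paths: resolve which extension actually exists on disk
# before handing the path to ansible -- hosts2.yml and hosts2.yaml are
# distinct files, and a missing file makes every inventory plugin decline.
from pathlib import Path


def resolve_inventory(directory, stem="hosts2"):
    """Return whichever of <stem>.yml / <stem>.yaml exists in directory."""
    for ext in (".yml", ".yaml"):
        candidate = Path(directory) / f"{stem}{ext}"
        if candidate.is_file():
            return candidate
    raise FileNotFoundError(f"neither {stem}.yml nor {stem}.yaml found in {directory}")
```

Passing the resolved path (rather than a retyped one) to ansible-inventory removes one source of "inventory source not existing" warnings.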
That file looks pretty readable to me:
zhengyi at vmbox in kubespray on master via kubespray-venv
➜ ls -al inventory/mycluster
total 12
drwxr-xr-x. 4 zhengyi zhengyi 95 Mar 1 13:03 .
drwxr-xr-x. 5 zhengyi zhengyi 50 Jan 30 16:29 ..
drwxr-xr-x. 4 zhengyi zhengyi 52 Jan 30 16:27 group_vars
-rw-r--r--. 1 zhengyi zhengyi 990 Mar 1 13:03 hosts2.yml
-rw-r--r--. 1 zhengyi zhengyi 1095 Mar 1 13:02 hosts.yml
-rw-r--r--. 1 zhengyi zhengyi 1028 Jan 30 16:27 inventory.ini
drwxr-xr-x. 2 zhengyi zhengyi 81 Jan 30 16:27 patches
zhengyi at vmbox in kubespray on master via kubespray-venv
➜ cat inventory/mycluster/hosts2.yml
all:
hosts:
node1:
ansible_host: 192.168.1.116
ip: 192.168.1.116
access_ip: 192.168.1.116
node2:
ansible_host: 192.168.1.118
ip: 192.168.1.118
access_ip: 192.168.1.118
node3:
ansible_host: 192.168.1.119
ip: 192.168.1.119
access_ip: 192.168.1.119
node4:
ansible_host: 192.168.1.115
ip: 192.168.1.115
access_ip: 192.168.1.115
node5:
ansible_host: 192.168.1.117
ip: 192.168.1.117
access_ip: 192.168.1.117
node6:
ansible_host: 192.168.1.114
ip: 192.168.1.114
access_ip: 192.168.1.114
children:
kube_control_plane:
hosts:
node1:
node2:
kube_node:
hosts:
node1:
node2:
node3:
node4:
node5:
node6:
etcd:
hosts:
node1:
node2:
node3:
k8s_cluster:
children:
kube_control_plane:
kube_node:
calico_rr:
hosts: {}
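As a structural sanity check on the file just shown (a sketch assuming the hosts2.yml contents above; PyYAML is already a dependency of ansible-core), the groups can be loaded and counted directly:

```python
# Sketch: parse the inventory with PyYAML and report each group's host count,
# confirming the layout matches what ansible's YAML inventory plugin expects.
import yaml  # PyYAML, a dependency of ansible-core


def group_sizes(path):
    """Map each inventory group to the number of hosts listed under it."""
    with open(path) as f:
        data = yaml.safe_load(f)
    sizes = {"all": len(data["all"]["hosts"])}
    for group, body in data["all"]["children"].items():
        # Groups like k8s_cluster only have "children", so default to empty
        hosts = (body or {}).get("hosts") or {}
        sizes[group] = len(hosts)
    return sizes

# Against the hosts2.yml above this gives:
# {'all': 6, 'kube_control_plane': 2, 'kube_node': 6, 'etcd': 3,
#  'k8s_cluster': 0, 'calico_rr': 0}
```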
Weirdly, it is a perfectly good inventory as far as ansible itself is concerned:
zhengyi at vmbox in kubespray on master via kubespray-venv
➜ ansible -i inventory/mycluster/hosts2.yml -m ping etcd -e ansible_user=fedora
[WARNING]: Skipping callback plugin 'ara_default', unable to load
node2 | SUCCESS => {
"changed": false,
"ping": "pong"
}
node3 | SUCCESS => {
"changed": false,
"ping": "pong"
}
node1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.