crypt lib missing from Python requirements, causing failure on Debian 13 when installing with the ArgoCD add-on

Open · haklein opened this issue 4 months ago · 5 comments

What happened?

When running Kubespray on Debian 13 and deploying ArgoCD, the play fails because the Python environment is missing the password-hashing dependencies (crypt/passlib):

TASK [kubernetes-apps/argocd : Kubernetes Apps | Install ArgoCD] ***************
ok: [k8s-cluster-master-1] => (item={'name': 'namespace', 'file': 'argocd-namespace.yml'})
ok: [k8s-cluster-master-1] => (item={'name': 'install', 'file': 'argocd-install.yml', 'namespace': 'argocd', 'url': 'https://raw.githubusercontent.com/argoproj/argo-cd/v2.11.0/manifests/install.yaml'})
fatal: [k8s-cluster-master-1]: FAILED! => 
  msg: Unable to encrypt nor hash, either crypt or passlib must be installed.. No module named 'crypt'. Unable to encrypt nor hash, either crypt or passlib must be installed.. No module named 'crypt'

TASK [kubernetes-apps/argocd : Kubernetes Apps | Set ArgoCD custom admin password] ***

I worked around this by pip-installing passlib[bcrypt] into the Kubespray venv.
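For reference, the workaround was roughly the following (the venv path is the one from my kubitect setup and will differ elsewhere):

  # install the password-hashing backend into the venv that ansible/kubespray runs from
  /root/.kubitect/share/venv/kubespray/v2.26.0/bin/pip install 'passlib[bcrypt]'

After that, the play no longer fails with the crypt/passlib error.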

What did you expect to happen?

requirements.txt should include the crypto/password-hashing libraries that the ArgoCD add-on needs.
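For illustration only (not a tested pin, just the shape of the fix), something like this line in requirements.txt would cover it:

  passlib[bcrypt]   # password-hashing backend; the stdlib 'crypt' module is gone on Python 3.13+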

How can we reproduce it (as minimally and precisely as possible)?

Run Kubespray from a Debian 13 host and deploy with the ArgoCD add-on enabled (a sketch follows below).
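A minimal sketch of what I effectively ran (I drove it through kubitect, so the inventory path and exact flags below are illustrative):

  # on a Debian 13 control host, inside the kubespray checkout/venv
  echo 'argocd_enabled: true' >> inventory/mycluster/group_vars/k8s_cluster/addons.yml
  ansible-playbook -i inventory/mycluster/hosts.ini cluster.yml -b

The failure comes from the kubernetes-apps/argocd role as soon as it needs a password hash (the "Unable to encrypt nor hash" error above).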

OS

Other|Unsupported

Version of Ansible

ansible [core 2.16.14]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /root/.kubitect/share/venv/kubespray/v2.26.0/lib/python3.13/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = ./.kubitect/share/venv/kubespray/v2.26.0/bin/ansible
  python version = 3.13.5 (main, Jun 25 2025, 18:55:22) [GCC 14.2.0] (/root/.kubitect/share/venv/kubespray/v2.26.0/bin/python)
  jinja version = 3.1.6
  libyaml = True

Version of Python

Python 3.13.5

Version of Kubespray (commit)

v2.26.0

Network plugin used

calico

Full inventory with variables

node1 | SUCCESS => { "hostvars[inventory_hostname]": { "allow_unsupported_distribution_setup": false, "ansible_check_mode": false, "ansible_config_file": null, "ansible_connection": "local", "ansible_diff_mode": false, "ansible_facts": {}, "ansible_forks": 5, "ansible_inventory_sources": [ "/root/.kubitect/clusters/k8s-cluster/ansible/kubespray/inventory/local/hosts.ini" ], "ansible_playbook_python": "/root/.kubitect/share/venv/kubespray/v2.26.0/bin/python", "ansible_verbosity": 0, "ansible_version": { "full": "2.16.14", "major": 2, "minor": 16, "revision": 14, "string": "2.16.14" }, "argocd_enabled": false, "auto_renew_certificates": false, "bin_dir": "/usr/local/bin", "calico_cni_name": "k8s-pod-network", "calico_pool_blocksize": 26, "cephfs_provisioner_enabled": false, "cert_manager_enabled": false, "cilium_l2announcements": false, "cluster_name": "cluster.local", "container_manager": "containerd", "coredns_k8s_external_zone": "k8s_external.local", "credentials_dir": "/root/.kubitect/clusters/k8s-cluster/ansible/kubespray/inventory/local/credentials", "default_kubelet_config_dir": "/etc/kubernetes/dynamic_kubelet_dir", "deploy_netchecker": false, "dns_domain": "cluster.local", "dns_mode": "coredns", "docker_bin_dir": "/usr/bin", "docker_container_storage_setup": false, "docker_daemon_graph": "/var/lib/docker", "docker_dns_servers_strict": false, "docker_iptables_enabled": "false", "docker_log_opts": "--log-opt max-size=50m --log-opt max-file=5", "docker_rpm_keepcache": 1, "enable_coredns_k8s_endpoint_pod_names": false, "enable_coredns_k8s_external": false, "enable_dual_stack_networks": false, "enable_nat_default_gateway": true, "enable_nodelocaldns": true, "enable_nodelocaldns_secondary": false, "etcd_data_dir": "/var/lib/etcd", "etcd_deployment_type": "host", "event_ttl_duration": "1h0m0s", "gateway_api_enabled": false, "group_names": [ "etcd", "k8s_cluster", "kube_control_plane", "kube_node" ], "groups": { "all": [ "node1" ], "etcd": [ "node1" ], "k8s_cluster": [ "node1" ], "kube_control_plane": [ "node1" ], "kube_node": [ "node1" ], "ungrouped": [] }, "helm_enabled": false, "ingress_alb_enabled": false, "ingress_nginx_enabled": false, "ingress_publish_status_address": "", "inventory_dir": "/root/.kubitect/clusters/k8s-cluster/ansible/kubespray/inventory/local", "inventory_file": "/root/.kubitect/clusters/k8s-cluster/ansible/kubespray/inventory/local/hosts.ini", "inventory_hostname": "node1", "inventory_hostname_short": "node1", "k8s_image_pull_policy": "IfNotPresent", "kata_containers_enabled": false, "krew_enabled": false, "krew_root_dir": "/usr/local/krew", "kube_api_anonymous_auth": true, "kube_apiserver_ip": "10.233.0.1", "kube_apiserver_port": 6443, "kube_cert_dir": "/etc/kubernetes/ssl", "kube_cert_group": "kube-cert", "kube_config_dir": "/etc/kubernetes", "kube_encrypt_secret_data": false, "kube_log_level": 2, "kube_manifest_dir": "/etc/kubernetes/manifests", "kube_network_node_prefix": 24, "kube_network_node_prefix_ipv6": 120, "kube_network_plugin": "calico", "kube_network_plugin_multus": false, "kube_ovn_default_gateway_check": true, "kube_ovn_default_logical_gateway": false, "kube_ovn_default_vlan_id": 100, "kube_ovn_dpdk_enabled": false, "kube_ovn_enable_external_vpc": true, "kube_ovn_enable_lb": true, "kube_ovn_enable_np": true, "kube_ovn_enable_ssl": false, "kube_ovn_encap_checksum": true, "kube_ovn_external_address": "8.8.8.8", "kube_ovn_external_address_ipv6": "2400:3200::1", "kube_ovn_external_dns": "alauda.cn", "kube_ovn_hw_offload": false, "kube_ovn_ic_autoroute": 
true, "kube_ovn_ic_dbhost": "127.0.0.1", "kube_ovn_ic_enable": false, "kube_ovn_ic_zone": "kubernetes", "kube_ovn_network_type": "geneve", "kube_ovn_node_switch_cidr": "100.64.0.0/16", "kube_ovn_node_switch_cidr_ipv6": "fd00:100:64::/64", "kube_ovn_pod_nic_type": "veth_pair", "kube_ovn_traffic_mirror": false, "kube_ovn_tunnel_type": "geneve", "kube_ovn_vlan_name": "product", "kube_owner": "kube", "kube_pods_subnet": "10.233.64.0/18", "kube_pods_subnet_ipv6": "fd85:ee78:d8a6:8607::1:0000/112", "kube_proxy_mode": "ipvs", "kube_proxy_nodeport_addresses": [], "kube_proxy_strict_arp": false, "kube_script_dir": "/usr/local/bin/kubernetes-scripts", "kube_service_addresses": "10.233.0.0/18", "kube_service_addresses_ipv6": "fd85:ee78:d8a6:8607::1000/116", "kube_token_dir": "/etc/kubernetes/tokens", "kube_version": "v1.30.4", "kube_vip_enabled": false, "kube_webhook_token_auth": false, "kube_webhook_token_auth_url_skip_tls_verify": false, "kubeadm_certificate_key": "8f73a44ba7bc57fa3e35f1fe68279fafea33bbbb6b3407426af1c70ca1a4051c", "kubeadm_patches": { "dest_dir": "/etc/kubernetes/patches", "enabled": false, "source_dir": "/root/.kubitect/clusters/k8s-cluster/ansible/kubespray/inventory/local/patches" }, "kubernetes_audit": false, "loadbalancer_apiserver_healthcheck_port": 8081, "loadbalancer_apiserver_port": 6443, "local_path_provisioner_enabled": false, "local_release_dir": "{{ansible_env.HOME}}/releases", "local_volume_provisioner_enabled": false, "macvlan_interface": "eth1", "metallb_enabled": false, "metallb_namespace": "metallb-system", "metallb_speaker_enabled": false, "metrics_server_enabled": false, "ndots": 2, "no_proxy_exclude_workers": false, "node_feature_discovery_enabled": false, "nodelocaldns_bind_metrics_host_ip": false, "nodelocaldns_health_port": 9254, "nodelocaldns_ip": "169.254.25.10", "nodelocaldns_second_health_port": 9256, "nodelocaldns_secondary_skew_seconds": 5, "ntp_enabled": false, "ntp_manage_config": false, "ntp_servers": [ "0.pool.ntp.org iburst", "1.pool.ntp.org iburst", "2.pool.ntp.org iburst", "3.pool.ntp.org iburst" ], "omit": "__omit_place_holder__0004b3c187bbd84c8f90216f356372afdff83f46", "persistent_volumes_enabled": false, "playbook_dir": "/root", "rbd_provisioner_enabled": false, "registry_enabled": false, "remove_anonymous_access": false, "resolvconf_mode": "host_resolvconf", "retry_stagger": 5, "skydns_server": "10.233.0.3", "skydns_server_secondary": "10.233.0.4", "unsafe_show_logs": false, "volume_cross_zone_attachment": false } }

Command used to invoke ansible

ran via kubitect

Output of ansible run

TASK [kubernetes-apps/argocd : Kubernetes Apps | Install ArgoCD] ***************
ok: [k8s-cluster-master-1] => (item={'name': 'namespace', 'file': 'argocd-namespace.yml'})
ok: [k8s-cluster-master-1] => (item={'name': 'install', 'file': 'argocd-install.yml', 'namespace': 'argocd', 'url': 'https://raw.githubusercontent.com/argoproj/argo-cd/v2.11.0/manifests/install.yaml'})
fatal: [k8s-cluster-master-1]: FAILED! =>
  msg: Unable to encrypt nor hash, either crypt or passlib must be installed.. No module named 'crypt'. Unable to encrypt nor hash, either crypt or passlib must be installed.. No module named 'crypt'

TASK [kubernetes-apps/argocd : Kubernetes Apps | Set ArgoCD custom admin password] ***

Anything else we need to know

No response

haklein · Aug 20 '25 08:08

Does this happen on the master branch?

tico88612 · Aug 20 '25 09:08

Does this happen on the master branch?

This shows up regardless of the branch; I encountered it on master and on v2.26.0. It appeared as soon as I upgraded the hardware node from Debian 12 to Debian 13, and I confirmed it on two different machines. So it is likely related to a change in the default Python between Debian 12 and Debian 13.

haklein · Aug 20 '25 09:08

This shows up regardless of the branch; I encountered it on master and on v2.26.0. It appeared as soon as I upgraded the hardware node from Debian 12 to Debian 13, and I confirmed it on two different machines. So it is likely related to a change in the default Python between Debian 12 and Debian 13.

So, does that mean this only happens on Debian 13?

tico88612 · Aug 20 '25 09:08

So, does that mean this only happens on Debian 13?

Yes, this only happens when the Kubespray playbook runs on a Debian 13 host.
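Most likely because Debian 13 ships Python 3.13, where the stdlib crypt module was removed (PEP 594), while Debian 12's default Python 3.11 still has it; without crypt, Ansible's password hashing needs passlib, which Kubespray's requirements.txt does not pull in. A quick check (plain system python3 assumed):

  # works on Debian 12 (Python 3.11), raises ModuleNotFoundError on Debian 13 (Python 3.13)
  python3 -c "import crypt"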

haklein · Aug 20 '25 11:08

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Nov 18 '25 12:11

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Dec 18 '25 12:12