
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed when using KUBECONFIG env in config.load_kube_config(config_file=

d33psky opened this issue 11 months ago • 4 comments

I do not have permission to reopen #1767, but the issue persists into 2025 and needs a fix or a generic workaround.

What happened (please include outputs or screenshots): Setting the config_file argument of config.load_kube_config to point to a kubeconfig other than the default ~/.kube/config fails with

urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='XXX', port=16443): Max retries exceeded with url: /api/v1/nodes (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1000)')))

(with the hostname replaced by XXX), while kubectl can use the same file just fine via the KUBECONFIG env var.

What you expected to happen: kubernetes-client/python should be able to use a kubeconfig other than the default one.

How to reproduce it (as minimally and precisely as possible): Have the KUBECONFIG environment variable point to a kubeconfig file other than the default ~/.kube/config, then:

>>> import kubernetes
>>> import os
>>> os.path.exists(os.environ["KUBECONFIG"])
True
>>> kubernetes.config.load_kube_config(os.environ["KUBECONFIG"])
>>> v1 = kubernetes.client.CoreV1Api()
>>> v1.list_node()

This throws the SSLCertVerificationError shown above.
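
The CA bundle the client ends up with can be inspected right after the load; a minimal diagnostic sketch (assuming the kubeconfig carries its CA via certificate-authority or certificate-authority-data):

    import os
    import kubernetes

    kubernetes.config.load_kube_config(os.environ["KUBECONFIG"])
    conf = kubernetes.client.Configuration.get_default_copy()
    # Which API host and CA bundle did the loader actually pick up?
    print("host       :", conf.host)
    print("ssl_ca_cert:", conf.ssl_ca_cert)
    print("verify_ssl :", conf.verify_ssl)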

Anything else we need to know?: Original ticket is #1767

Environment:

  • Kubernetes version (kubectl version): Client Version: v1.31.5, Kustomize Version: v5.4.2, Server Version: v1.30.8
  • OS (e.g., MacOS 10.13.6): Ubuntu 22.04.5 LTS
  • Python version (python --version): Python 3.12.3
  • Python client version (pip list | grep kubernetes): kubernetes 31.0.0

d33psky avatar Jan 22 '25 17:01 d33psky

I hit this too recently. I think I understand what is going on and I've put a rough PR together that solves it for me. Will submit shortly.

rossigee avatar Jan 26 '25 07:01 rossigee

@d33psky - it seems it can be done like this (no patch necessary)...

    import os
    import kubernetes

    # Load the kubeconfig pointed to by KUBECONFIG, then override the CA bundle
    # on a copy of the resulting configuration with the system trust store.
    kubernetes.config.load_kube_config(os.environ["KUBECONFIG"])
    configuration = kubernetes.client.Configuration.get_default_copy()
    configuration.ssl_ca_cert = "/etc/ssl/certs/ca-certificates.crt"
    api_client = kubernetes.client.ApiClient(configuration=configuration)
    v1 = kubernetes.client.CoreV1Api(api_client)
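
Note this only verifies if the API server's certificate chains to something in /etc/ssl/certs/ca-certificates.crt (the Debian/Ubuntu system bundle). If the cluster uses a private CA that is not in that store, the same idea works by pointing ssl_ca_cert at the cluster CA file directly; the path below is a placeholder, not something the client provides:

    # Variant of the same workaround: trust the cluster CA from the kubeconfig's
    # certificate-authority file instead of the system bundle (placeholder path).
    configuration.ssl_ca_cert = "/path/to/cluster-ca.crt"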

rossigee avatar Jan 26 '25 07:01 rossigee

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Apr 26 '25 07:04 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar May 26 '25 07:05 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Jun 25 '25 08:06 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to the /close not-planned command in the k8s-triage-robot comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Jun 25 '25 08:06 k8s-ci-robot

Encountered this issue too. The root cause for me seems to be the one described in https://github.com/kubernetes-client/python/issues/2394 .

My fix was to install pyenv from https://github.com/pyenv/pyenv , use it to install Python 3.12, and then everything worked.

Leaving this here so that nobody else wastes two days like I just did.
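
A quick way to rule that cause in or out before rebuilding the environment is to check which interpreter and OpenSSL build the client is actually running under (standard library only; values may differ between the system Python and a pyenv-built one):

    import ssl
    import sys

    # Print the interpreter version and the OpenSSL build it links against.
    print(sys.version)
    print(ssl.OPENSSL_VERSION)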

esantoro avatar Aug 22 '25 14:08 esantoro