The client SSL handshake only works with Mozilla's standard root certificates from the certifi package; there is no way to supply custom root certificates.
What happened (please include outputs or screenshots):

```
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api.a1.cp.cna.at', port=6443): Max retries exceeded with url: /apis/authentication.k8s.io/ (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1000)')))
python-BaseException
```

What you expected to happen:

Missing implementation: `configuration.ssl_ca_cert` is always set to `None`, so there is no way to pass a custom `ssl_ca_cert` path. A sketch of the usage one would expect to work follows below.
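For illustration, a minimal sketch of the usage one would expect (the CA bundle path is a placeholder, not a value from this report):

```python
from kubernetes import client

# Expected, but currently ineffective: supply a custom CA bundle for
# server certificate verification.
configuration = client.Configuration()
configuration.host = "https://api.a1.cp.cna.at:6443"
configuration.ssl_ca_cert = "/etc/ssl/certs/local-issuer.pem"  # placeholder path

api_client = client.ApiClient(configuration)
v1 = client.CoreV1Api(api_client)
```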
How to reproduce it (as minimally and precisely as possible):
Use an on-premise Kubernetes cluster with your own local issuer certificate.
Anything else we need to know?:
The root cause of the problem is in `rest.RESTClientObject.__init__`:
```python
# ca_certs
if configuration.ssl_ca_cert:
    # TODO: not implemented -- configuration.ssl_ca_cert is always set to None!
    ca_certs = configuration.ssl_ca_cert
else:
    # quick fix could be: use the environment variable that python requests (urllib) honors
    import os
    ca_certs = os.environ.get("REQUESTS_CA_BUNDLE")
    if ca_certs is None:
        # if no certificate file is set, use Mozilla's root certificates
        ca_certs = certifi.where()
```
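With a quick fix along those lines applied, trusting a local issuer would reduce to setting the environment variable before the REST client is constructed. A minimal sketch, assuming the fallback above is in place (the CA bundle path is a placeholder):

```python
import os

# Assumes the REQUESTS_CA_BUNDLE fallback sketched above: set the variable
# before any kubernetes client objects are created.
os.environ["REQUESTS_CA_BUNDLE"] = "/etc/ssl/certs/local-issuer.pem"  # placeholder

from kubernetes import client, config

config.load_kube_config()   # reads cluster address and credentials from kubeconfig
v1 = client.CoreV1Api()     # TLS verification would now use the custom bundle
print(v1.list_namespace())
```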
Environment:
- Kubernetes version (`kubectl version`): `oc version` reports Client Version 4.15.0-202411060036.p0.g8231637.assembly.stream.el8-8231637, Kustomize Version v5.0.4-0.20230601165947-6ce0bf390ce3, Kubernetes Version v1.29.11+148a389
- OS (e.g., MacOS 10.13.6): MacOS 15.3.1
- Python version (`python --version`): >3.11
- Python client version (`pip list | grep kubernetes`): kubernetes 32.0.0
Just found issue #1131; it is the same problem.
/assign @palnabarun
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.