[SSL: CA_KEY_TOO_SMALL]
Hi, my friends.
Thanks for your work. I got an error when running the script below. Could you help me?
system
macOS 12.4
python version
Python 3.10.4
code
from kubernetes import client, config
# Configs can be set in Configuration class directly or using helper utility
config.load_kube_config()
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
error
/Users/mars/env/abc/bin/python /Users/mars/weiyun/dev/PycharmProjects/abc/servicemonitor/get_servicemonitor.py
Listing pods with their IPs:
Traceback (most recent call last):
File "/Users/mars/env/abc/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/Users/mars/env/abc/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/Users/mars/env/abc/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1040, in _validate_conn
conn.connect()
File "/Users/mars/env/abc/lib/python3.10/site-packages/urllib3/connection.py", line 414, in connect
self.sock = ssl_wrap_socket(
File "/Users/mars/env/abc/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 418, in ssl_wrap_socket
context.load_cert_chain(certfile, keyfile)
ssl.SSLError: [SSL: CA_KEY_TOO_SMALL] ca key too small (_ssl.c:3874)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/mars/weiyun/dev/PycharmProjects/abc/servicemonitor/get_servicemonitor.py", line 8, in <module>
ret = v1.list_pod_for_all_namespaces(watch=False)
File "/Users/mars/env/abc/lib/python3.10/site-packages/kubernetes/client/api/core_v1_api.py", line 16864, in list_pod_for_all_namespaces
return self.list_pod_for_all_namespaces_with_http_info(**kwargs) # noqa: E501
File "/Users/mars/env/abc/lib/python3.10/site-packages/kubernetes/client/api/core_v1_api.py", line 16967, in list_pod_for_all_namespaces_with_http_info
return self.api_client.call_api(
File "/Users/mars/env/abc/lib/python3.10/site-packages/kubernetes/client/api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
File "/Users/mars/env/abc/lib/python3.10/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
response_data = self.request(
File "/Users/mars/env/abc/lib/python3.10/site-packages/kubernetes/client/api_client.py", line 373, in request
return self.rest_client.GET(url,
File "/Users/mars/env/abc/lib/python3.10/site-packages/kubernetes/client/rest.py", line 239, in GET
return self.request("GET", url,
File "/Users/mars/env/abc/lib/python3.10/site-packages/kubernetes/client/rest.py", line 212, in request
r = self.pool_manager.request(method, url,
File "/Users/mars/env/abc/lib/python3.10/site-packages/urllib3/request.py", line 74, in request
return self.request_encode_url(
File "/Users/mars/env/abc/lib/python3.10/site-packages/urllib3/request.py", line 96, in request_encode_url
return self.urlopen(method, url, **extra_kw)
File "/Users/mars/env/abc/lib/python3.10/site-packages/urllib3/poolmanager.py", line 376, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
File "/Users/mars/env/abc/lib/python3.10/site-packages/urllib3/connectionpool.py", line 813, in urlopen
return self.urlopen(
File "/Users/mars/env/abc/lib/python3.10/site-packages/urllib3/connectionpool.py", line 813, in urlopen
return self.urlopen(
File "/Users/mars/env/abc/lib/python3.10/site-packages/urllib3/connectionpool.py", line 813, in urlopen
return self.urlopen(
File "/Users/mars/env/abc/lib/python3.10/site-packages/urllib3/connectionpool.py", line 785, in urlopen
retries = retries.increment(
File "/Users/mars/env/abc/lib/python3.10/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='x.x.x.x', port=6443): Max retries exceeded with url: /api/v1/pods?watch=False (Caused by SSLError(SSLError(397, '[SSL: CA_KEY_TOO_SMALL] ca key too small (_ssl.c:3874)')))
Thanks
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
My kubeconfig in ~/.kube/config works with kubectl, but I'm having the same issue mentioned above.
The kubernetes SDK is on 24.2.0.
client-certificate-data has 2 certs. (Provider: Alicloud ACK)
I'm using Aliyun Kubernetes (ACK) and encountered the same issue as well.
It turns out that the client-certificate-data field in the kubeconfig file contains multiple base64-encoded CA certificates. Decode the client-certificate-data string and you'll find that one of the CAs has a 1024-bit key.
The OpenSSL in our image is configured with SECLEVEL=2, which rejects CA keys shorter than 2048 bits.
So either get a kubeconfig with a 2048-bit CA, or set SECLEVEL=1 in openssl.cnf if you're OK with the lower security bar.
You can find the location of openssl.cnf with openssl version -d. For example, change the config from CipherString = DEFAULT@SECLEVEL=2 to CipherString = DEFAULT@SECLEVEL=1.
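To confirm this locally, here is a minimal sketch (assuming PyYAML and the cryptography package are installed, and that the relevant entry is the first user in the kubeconfig) that decodes client-certificate-data from ~/.kube/config and prints each certificate's key size:

import base64
import os
import yaml
from cryptography import x509

# Load the same kubeconfig the client uses (~/.kube/config by default).
with open(os.path.expanduser("~/.kube/config")) as f:
    kubeconfig = yaml.safe_load(f)

# Assumes the first user entry; client-certificate-data is a base64-encoded
# bundle that may hold several concatenated PEM certificates.
cert_data = kubeconfig["users"][0]["user"]["client-certificate-data"]
pem_bundle = base64.b64decode(cert_data).decode()

END = "-----END CERTIFICATE-----"
for block in pem_bundle.split(END):
    if "-----BEGIN CERTIFICATE-----" in block:
        cert = x509.load_pem_x509_certificate((block + END + "\n").encode())
        # A 1024-bit key here is what SECLEVEL=2 rejects.
        print(cert.subject.rfc4514_string(), cert.public_key().key_size, "bits")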
@juiceyang Thanks. Changing the cnf file did not work for me; however, when I added the following lines to my script, it worked:
import urllib3
urllib3.util.ssl_.DEFAULT_CIPHERS = "ALL:@SECLEVEL=1"
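For reference, here is a minimal sketch of that workaround applied to the original script. It assumes the urllib3 1.26.x line shown in the traceback; urllib3 2.x removed the DEFAULT_CIPHERS attribute, so it does not apply there.

import urllib3
from kubernetes import client, config

# Relax the cipher security level so the 1024-bit CA is accepted.
# DEFAULT_CIPHERS is read when the connection is built, so setting it
# anywhere before the first request is enough. Same lower-security caveat
# as editing openssl.cnf.
urllib3.util.ssl_.DEFAULT_CIPHERS = "ALL:@SECLEVEL=1"

config.load_kube_config()
v1 = client.CoreV1Api()

print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))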
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The urllib3 DEFAULT_CIPHERS workaround above works for regular requests, but not for WebSockets, because the WebSocket path doesn't go through urllib3. So you will still hit this issue for kubectl exec-style calls. I ran into this with Aliyun Kubernetes as well.
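For illustration, a call like the one below (pod name and namespace are placeholders) goes over the WebSocket path via kubernetes.stream and websocket-client, so the DEFAULT_CIPHERS override does not apply; for these calls the openssl.cnf SECLEVEL change or a kubeconfig with a 2048-bit CA is still needed.

from kubernetes import client, config
from kubernetes.stream import stream

config.load_kube_config()
v1 = client.CoreV1Api()

# connect_get_namespaced_pod_exec is served over a WebSocket upgrade,
# so it bypasses urllib3 and its DEFAULT_CIPHERS setting.
resp = stream(
    v1.connect_get_namespaced_pod_exec,
    "example-pod",   # placeholder pod name
    "default",       # placeholder namespace
    command=["ls", "/"],
    stderr=True, stdin=False, stdout=True, tty=False,
)
print(resp)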