`HTTP response body: 404 page not found` when trying to create a cert-manager `ClusterIssuer` with `client.CustomObjectsApi()`
What happened (please include outputs or screenshots):
I get `HTTP response body: 404 page not found` when I try to create a `ClusterIssuer` for cert-manager. Here's the full stack trace:
File "/Users/jesse/repos/smol-k8s-lab/smol_k8s_lab/k8s_apps/base_install.py", line 33, in install_base_apps
configure_cert_manager(k8s_obj, email)
File "/Users/jesse/repos/smol-k8s-lab/smol_k8s_lab/k8s_apps/cert_manager.py", line 50, in configure_cert_manager
k8s_obj.create_from_manifest_dict('cert-manager.io', 'v1', 'ingress',
File "/Users/jesse/repos/smol-k8s-lab/smol_k8s_lab/k8s_tools/k8s_lib.py", line 46, in create_from_manifest_dict
self.custom_obj_api.create_namespaced_custom_object(
File "/Users/jesse/Library/Caches/pypoetry/virtualenvs/smol-k8s-lab-ckSmZNbs-py3.11/lib/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py", line 225, in create_namespaced_custom_object
return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs) # noqa: E501
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jesse/Library/Caches/pypoetry/virtualenvs/smol-k8s-lab-ckSmZNbs-py3.11/lib/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py", line 344, in create_namespaced_custom_object_with_http_info
return self.api_client.call_api(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jesse/Library/Caches/pypoetry/virtualenvs/smol-k8s-lab-ckSmZNbs-py3.11/lib/python3.11/site-packages/kubernetes/client/api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jesse/Library/Caches/pypoetry/virtualenvs/smol-k8s-lab-ckSmZNbs-py3.11/lib/python3.11/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
response_data = self.request(
^^^^^^^^^^^^^
File "/Users/jesse/Library/Caches/pypoetry/virtualenvs/smol-k8s-lab-ckSmZNbs-py3.11/lib/python3.11/site-packages/kubernetes/client/api_client.py", line 391, in request
return self.rest_client.POST(url,
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jesse/Library/Caches/pypoetry/virtualenvs/smol-k8s-lab-ckSmZNbs-py3.11/lib/python3.11/site-packages/kubernetes/client/rest.py", line 276, in POST
return self.request("POST", url,
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jesse/Library/Caches/pypoetry/virtualenvs/smol-k8s-lab-ckSmZNbs-py3.11/lib/python3.11/site-packages/kubernetes/client/rest.py", line 235, in request
raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (404)
Reason: Not Found
HTTP response headers: HTTPHeaderDict({'Audit-Id': '5b282002-3203-4546-ad9b-c69f2137e71b', 'Cache-Control': 'no-cache, private', 'Content-Type': 'text/plain; charset=utf-8', 'X-Content-Type-Options': 'nosniff', 'X-Kubernetes-Pf-Flowschema-Uid': '24a543f5-d37f-4a78-9f8f-c677cf2ca16f', 'X-Kubernetes-Pf-Prioritylevel-Uid': '26184213-a94f-4697-aa31-b38023412c50', 'Date': 'Tue, 15 Aug 2023 18:27:20 GMT', 'Content-Length': '19'})
HTTP response body: 404 page not found
What you expected to happen: I expected a `ClusterIssuer` manifest to be created in the ingress namespace.
How to reproduce it (as minimally and precisely as possible): Create a cluster with kind and helm install cert-manager:

kind create cluster
helm repo add jetstack https://charts.jetstack.io
helm upgrade cert-manager jetstack/cert-manager --install -n ingress --create-namespace --version 1.12.3 --set installCRDs=true --wait --wait-for-jobs
Manually verify that all the CRDs were created:

$ kubectl get crd | grep cert
certificaterequests.cert-manager.io   2023-08-15T18:11:11Z
certificates.cert-manager.io          2023-08-15T18:11:11Z
challenges.acme.cert-manager.io       2023-08-15T18:11:11Z
clusterissuers.cert-manager.io        2023-08-15T18:11:11Z
issuers.cert-manager.io               2023-08-15T18:11:11Z
orders.acme.cert-manager.io           2023-08-15T18:11:11Z
This is happening in a small program I'm writing called smol-k8s-lab, but I've extracted just the part that is failing. I mostly grabbed this code from the example in examples/namespaced_custom_object.py:
from kubernetes import client, config, utils
from kubernetes.client.rest import ApiException


class K8s():
    """
    Python class for the kubernetes python cli
    """
    def __init__(self):
        """
        This is mostly for storing the k8s config
        """
        config.load_kube_config()
        self.api_client = client.ApiClient()
        self.api_instance = client.CoreV1Api(self.api_client)
        self.custom_obj_api = client.CustomObjectsApi()

    def create_from_manifest_dict(self,
                                  api_group: str = "",
                                  api_version: str = "",
                                  namespace: str = "",
                                  plural_obj_name: str = "",
                                  manifest_dict: dict = {}) -> bool:
        """
        creates any resource in k8s from a python dictionary
        https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CustomObjectsApi.md#create_namespaced_custom_object
        """
        # create the resource
        self.custom_obj_api.create_namespaced_custom_object(
            group=api_group,
            version=api_version,
            namespace=namespace,
            plural=plural_obj_name,
            body=manifest_dict,
        )
        return True


def configure_cert_manager(k8s_obj: K8s, email_addr: str = "") -> bool:
    """
    Installs letsencrypt clusterIssuers for both staging and production
    """
    # we create a ClusterIssuer for both staging and prod
    acme_staging = '"https://acme-staging-v02.api.letsencrypt.org/directory"'
    for issuer in ['letsencrypt-staging', 'letsencrypt-prod']:
        if issuer == "letsencrypt-prod":
            acme_staging = acme_staging.replace("staging-", "")
        issuers_dict = {
            'apiVersion': "cert-manager.io/v1",
            'kind': 'ClusterIssuer',
            'metadata': {'name': issuer},
            'spec': {
                'acme': {'email': email_addr,
                         'server': acme_staging,
                         'privateKeySecretRef': {
                             'name': '"letsencrypt-staging"'
                         },
                         'solvers': [
                             {'http01': {'ingress': {'class': 'nginx'}}}
                         ]}
            }
        }
        k8s_obj.create_from_manifest_dict('cert-manager.io', 'v1', 'ingress',
                                          'clusterissuers', issuers_dict)


configure_cert_manager(K8s(), "[email protected]")
Example manifest I'd like to apply as regular yaml:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: [email protected]
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
Anything else we need to know?:
Environment:
- Kubernetes version (`kubectl version`):
  WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
  Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.4", GitCommit:"fa3d7990104d7c1f16943a67f11b154b71f6a132", GitTreeState:"clean", BuildDate:"2023-07-19T12:14:48Z", GoVersion:"go1.20.6", Compiler:"gc", Platform:"darwin/arm64"}
  Kustomize Version: v5.0.1
  Server Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.3", GitCommit:"25b4e43193bcda6c7328a6d147b1fb73a33f1598", GitTreeState:"clean", BuildDate:"2023-06-15T00:38:14Z", GoVersion:"go1.20.5", Compiler:"gc", Platform:"linux/arm64"}
- OS (e.g., MacOS 10.13.6): Ventura 13.5
- Python version (`python --version`): Python 3.11.4
- Python client version (`pip list | grep kubernetes`): kubernetes 27.2.0
Let me know if there's something I'm doing incorrectly here.
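A note on the likely cause: `ClusterIssuer` is a cluster-scoped resource (`kubectl api-resources --api-group=cert-manager.io` lists `clusterissuers` with `NAMESPACED` set to `false`), so the API server exposes it at a path with no namespace segment. `create_namespaced_custom_object` builds a namespaced path that simply does not exist, which is why the server answers with a bare `404 page not found` instead of a structured API error. A minimal sketch of the path difference, reusing the group/version/plural from the snippet above (the real call at the end is commented out because it needs a live cluster):

```python
group, version, plural = "cert-manager.io", "v1", "clusterissuers"

# Path that create_namespaced_custom_object builds -- this route does not
# exist for a cluster-scoped CRD, so the API server returns a plain 404:
namespaced_path = f"/apis/{group}/{version}/namespaces/ingress/{plural}"

# Path that create_cluster_custom_object builds -- this one is served:
cluster_path = f"/apis/{group}/{version}/{plural}"

print(namespaced_path)  # /apis/cert-manager.io/v1/namespaces/ingress/clusterissuers
print(cluster_path)     # /apis/cert-manager.io/v1/clusterissuers

# The cluster-scoped call (requires a cluster, so commented out here):
# custom_obj_api = client.CustomObjectsApi()
# custom_obj_api.create_cluster_custom_object(group=group, version=version,
#                                             plural=plural, body=issuers_dict)
```

`create_cluster_custom_object` takes no `namespace` argument at all, which matches the shape of the URL it requests.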
I'm having issues with this as well, using the example in the docs as a template.
This one works:
from __future__ import print_function
import time
import kubernetes.client
from kubernetes.client.rest import ApiException
from pprint import pprint
from kubernetes import config

configuration = config.load_kube_config()

with kubernetes.client.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = kubernetes.client.CustomObjectsApi(api_client)
    group = 'apps'  # str | The custom resource's group name
    version = 'v1'  # str | The custom resource's version
    namespace = 'default'  # str | The custom resource's namespace
    plural = 'deployments'  # str | The custom resource's plural name. For TPRs this would be lowercase plural kind.
    body = {
        'apiVersion': 'apps/v1',
        'kind': 'Deployment',
        'metadata': {
            'name': 'k8s-py-client-nginx'
        },
        'spec': {
            'selector': {
                'matchLabels': {
                    'app': 'nginx'
                }
            },
            'replicas': 1,
            'template': {
                'metadata': {
                    'labels': {
                        'app': 'nginx'
                    }
                },
                'spec': {
                    'containers': [
                        {
                            'name': 'nginx',
                            'image': 'nginx:1.14.2',
                            'ports': [
                                {
                                    'containerPort': 80
                                }
                            ]
                        }
                    ]
                }
            }
        }
    }

    try:
        api_response = api_instance.create_namespaced_custom_object(group, version, namespace, plural, body)
        pprint(api_response)
    except ApiException as e:
        print("Exception when calling CustomObjectsApi->create_namespaced_custom_object: %s\n" % e)
But this one fails with a 404 error:
from __future__ import print_function
import time
import kubernetes.client
from kubernetes.client.rest import ApiException
from pprint import pprint
from kubernetes import config

configuration = config.load_kube_config()

with kubernetes.client.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = kubernetes.client.CustomObjectsApi(api_client)
    group = 'cert-manager.io'  # str | The custom resource's group name
    version = 'v1'  # str | The custom resource's version
    namespace = 'default'
    plural = 'clusterissuers'  # str | The custom resource's plural name. For TPRs this would be lowercase plural kind.
    body = {
        "apiVersion": "cert-manager.io/v1",
        "kind": "ClusterIssuer",
        "metadata": {
            "name": "letsencrypt-testing"
        },
        "spec": {
            "acme": {
                "email": "[email protected]",
                "server": "https://acme-staging-v02.api.letsencrypt.org/directory",
                "privateKeySecretRef": {
                    "name": "letsencrypt-testing"
                },
                "solvers": [
                    {
                        "http01": {
                            "ingress": {
                                "class": "nginx"
                            }
                        }
                    }
                ]
            }
        }
    }

    try:
        # note: namespace and version are swapped here relative to the
        # documented positional order (group, version, namespace, plural, body)
        api_response = api_instance.create_namespaced_custom_object(group, namespace, version, plural, body)
        pprint(api_response)
    except ApiException as e:
        print("Exception when calling CustomObjectsApi->create_namespaced_custom_object: %s\n" % e)
Error:
Exception when calling CustomObjectsApi->create_namespaced_custom_object: (404)
Reason: Not Found
HTTP response headers: HTTPHeaderDict({'Audit-Id': '7f3abaf4-2145-490d-a33b-67aade8afda5', 'Cache-Control': 'no-cache, private', 'Content-Type': 'text/plain; charset=utf-8', 'X-Content-Type-Options': 'nosniff', 'X-Kubernetes-Pf-Flowschema-Uid': '70fe66b4-2fc4-4fb5-92a2-31dbfd958c57', 'X-Kubernetes-Pf-Prioritylevel-Uid': 'e8ec24da-c46b-41f5-afb5-8697eb7236ca', 'Date': 'Tue, 15 Aug 2023 19:38:11 GMT', 'Content-Length': '19'})
HTTP response body: 404 page not found
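For what it's worth, the two snippets differ in two ways, and both matter. First, the failing call passes `(group, namespace, version, plural, body)` while the documented positional order is `(group, version, namespace, plural, body)`, so `'default'` lands in the version slot of the URL. Second, even with the order fixed, `ClusterIssuer` is cluster-scoped, so the namespaced route does not exist; the cluster-scoped counterpart is `create_cluster_custom_object(group, version, plural, body)`. A small sketch (the path template is the one the client builds for namespaced custom objects) of what the swapped arguments do:

```python
def namespaced_request_path(group, version, namespace, plural):
    # the URL create_namespaced_custom_object builds from its positional args
    return f"/apis/{group}/{version}/namespaces/{namespace}/{plural}"

# documented order -- works for namespaced kinds like Deployment:
ok = namespaced_request_path("apps", "v1", "default", "deployments")

# order used in the failing snippet: namespace and version are swapped,
# so "default" lands where the version belongs:
swapped = namespaced_request_path("cert-manager.io", "default", "v1", "clusterissuers")

print(ok)       # /apis/apps/v1/namespaces/default/deployments
print(swapped)  # /apis/cert-manager.io/default/namespaces/v1/clusterissuers
```

Neither the swapped path nor the correctly ordered namespaced path exists for `clusterissuers`, which is why both attempts end in the same bare 404.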
I went through as many past GitHub Issues around this as I could, but couldn't find anything that helped. I ended up just wrapping kubectl
using some existing little libraries I'd written in smol-k8s-lab with rich and subprocess.
Full code example:
"""
Using Textualize's rich library to pretty print subprocess outputs,
so during long running commands, the user isn't wondering what's going on,
even if you don't actually output anything from stdout/stderr of the command.
"""
import logging as log
from subprocess import Popen, PIPE
from rich.console import Console
from rich.theme import Theme
from rich.progress import Progress
from time import sleep
soft_theme = Theme({"info": "dim cornflower_blue",
"warn": "bold black on yellow",
"danger": "bold magenta"})
console = Console(theme=soft_theme)
def basic_syntax(bash_string=""):
    """
    splits up a string and does some basic syntax highlighting
    """
    parts = bash_string.split(' ')
    base_cmd = f'[yellow]{parts[0]}[/yellow]'
    if len(parts) == 1:
        return base_cmd
    else:
        bash_string = bash_string.replace(parts[0], base_cmd, 1)
        formatted_str = f'[cornflower_blue]{parts[1]}[/cornflower_blue]'
        bash_string = bash_string.replace(parts[1], formatted_str, 1)
        return bash_string
def subproc(commands=[], **kwargs):
    """
    Takes a list of command strings to run in subprocess
    Optional vars - default, description:
        error_ok - catch Exceptions and log them, default: False
        quiet    - don't output from stderr/stdout, default: False
        spinner  - show an animated progress spinner. can break sudo
                   prompts and should be turned off. default: True
        cwd      - path to run commands in. default: pwd of user
        shell    - use shell with subprocess or not. default: False
        env      - dictionary of env variables for BASH. default: None
    """
    # get/set defaults and remove the 2 output specific args from the key word
    # args dict so we can use the rest to pass into subprocess.Popen later on
    spinner = kwargs.pop('spinner', True)
    quiet = kwargs.get('quiet', False)

    if spinner:
        # only need this if we're doing a progress spinner
        console = Console()

    for cmd in commands:
        # do some very basic syntax highlighting
        printed_cmd = basic_syntax(cmd)

        if not quiet:
            status_line = "[green] Running:[/green] "

            # make sure I'm not about to print a password, oof
            if 'password' not in cmd.lower():
                status_line += printed_cmd
            else:
                status_line += printed_cmd.split('assword')[0] + \
                    'assword[warn]:warning: TRUNCATED'
        else:
            cmd_parts = printed_cmd.split(' ')
            msg = '[green]Running [i]secret[/i] command:[b] ' + cmd_parts[0]
            status_line = " ".join([msg, cmd_parts[1], '[dim]...'])

        status_line += '\n'

        # Sometimes we need to not use a little loading bar
        if not spinner:
            log.info(status_line, extra={"markup": True})
            output = run_subprocess(cmd, **kwargs)
        else:
            log.debug(cmd)
            with console.status(status_line,
                                spinner='aesthetic',
                                speed=0.75) as status:
                output = run_subprocess(cmd, **kwargs)

    return output
def run_subprocess(command, **kwargs):
    """
    Takes a str command to run in BASH in a subprocess.
    Typically run from subproc, which handles output printing.
    Optional keyword vars:
        error_ok - bool, catch errors, defaults to False
        cwd      - str, current working dir which is the dir to run command in
        shell    - bool, run shell or not
        env      - environment variables you'd like to pass in
    """
    # get the values if passed in, otherwise, set defaults
    quiet = kwargs.pop('quiet', False)
    error_ok = kwargs.pop('error_ok', False)

    try:
        p = Popen(command.split(), stdout=PIPE, stderr=PIPE, **kwargs)
        res = p.communicate()
        return_code = p.returncode
    except Exception as e:
        if error_ok:
            log.debug(str(e))
            return str(e)
        else:
            raise Exception(e)

    res_stdout, res_stderr = res[0].decode('UTF-8'), res[1].decode('UTF-8')

    # if quiet = True, or res_stdout is empty, we hide this
    if res_stdout and not quiet:
        log.info(res_stdout)

    # check return code, raise error if failure
    if return_code != 0:
        # also scan both stdout and stderr for weird errors
        for output in [res_stdout.lower(), res_stderr.lower()]:
            if 'error' in output:
                err = f'Return code: "{str(return_code)}". Expected code is 0.'
                error_msg = f'\033[0;33m{err}\n{output}\033[00m'
                if error_ok:
                    log.error(error_msg)
                else:
                    raise Exception(error_msg)

    # sometimes stderr is empty, but sometimes stdout is empty
    for output in [res_stdout, res_stderr]:
        if output:
            return output
def simple_loading_bar(tasks={}, time_to_wait=120):
    """
    Prints a small loading bar using rich.
    Accepts a dict of {"task_name": "task"}
    example: {'Installing custom resource': 'kubectl apply -f thing.yml'}
    read more here:
        https://rich.readthedocs.io/en/stable/progress.html
    """
    for task_name, task_command in tasks.items():
        with Progress(transient=True) as progress:
            task1 = progress.add_task(f"[green]{task_name}...",
                                      total=time_to_wait)
            while not progress.finished:
                sleep(1)
                progress.update(task1, advance=2)
                # loops until this succeeds
                try:
                    subproc([task_command], spinner=False)
                except Exception as reason:
                    log.debug(f"Encountered Exception: {reason}")
                    sleep(3)
                    progress.update(task1, advance=2)
                    continue
                # execute if no exception
                else:
                    progress.update(task1, completed=time_to_wait)
                    sleep(.1)
                    break
    print('')
    return
def apply_custom_resources(custom_resource_dict_list: list = []):
    """
    Does a kubectl apply on a list of custom resource dicts, and retries
    if it fails, using a loading bar for progress
    """
    k_cmd = 'kubectl apply --wait -f '
    commands = {}
    log.debug(custom_resource_dict_list)

    # Write YAML data to '{XDG_CACHE_DIR}/{resource_name}.yaml'.
    for custom_resource_dict in custom_resource_dict_list:
        resource_name = "_".join([custom_resource_dict['kind'],
                                  custom_resource_dict['metadata']['name']])
        yaml_file_name = path.join(XDG_CACHE_DIR, f'{resource_name}.yaml')
        with open(yaml_file_name, 'w') as cr_file:
            dump(custom_resource_dict, cr_file)
        commands[f'Installing {resource_name}'] = k_cmd + yaml_file_name

    # loops with progress bar until this succeeds
    simple_loading_bar(commands)
def configure_cert_manager(email_addr: str = "") -> bool:
    """
    Installs cert-manager ClusterIssuers for both staging and prod
    """
    acme_staging = '"https://acme-staging-v02.api.letsencrypt.org/directory"'
    for issuer in ['letsencrypt-staging', 'letsencrypt-prod']:
        if issuer == "letsencrypt-prod":
            acme_staging = acme_staging.replace("staging-", "")
        issuers_dict = {
            'apiVersion': "cert-manager.io/v1",
            'kind': 'ClusterIssuer',
            'metadata': {'name': issuer},
            'spec': {
                'acme': {'email': email_addr,
                         'server': acme_staging,
                         'privateKeySecretRef': {
                             'name': '"letsencrypt-staging"'
                         },
                         'solvers': [
                             {'http01': {'ingress': {'class': 'nginx'}}}
                         ]}
            }
        }
        apply_custom_resources([issuers_dict])
    return True


configure_cert_manager("[email protected]")
I'm still available to help provide any details needed on this issue, but wanted to help others who may feel a bit of writer's block in the meantime. 💙
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.