kubernetes.client.api.core_v1_api.delete_node_with_http_info deletes all nodes when `name` is an empty string.
What happened (please include outputs or screenshots):
The Python client library deletes every node in a cluster when an empty string is passed as the node name. As part of our code, we call:
```python
k8s_client = kubernetes.client.ApiClient(configuration=client_config)
core_v1 = kubernetes.client.CoreV1Api(k8s_client)
core_v1.delete_node(node_name, body=kubernetes.client.V1DeleteOptions())
```
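Until the client itself rejects blank names, callers can add their own guard. A minimal sketch (the `delete_node_safely` wrapper is ours, not part of the library):

```python
def delete_node_safely(core_v1, node_name):
    """Hypothetical caller-side wrapper around CoreV1Api.delete_node.

    Refuses blank names so an empty string cannot silently turn the call
    into a DELETE against the /api/v1/nodes/ collection URL.
    """
    if not node_name or not node_name.strip():
        raise ValueError("refusing to delete node: name is blank")
    return core_v1.delete_node(node_name)
```

The same check works for any generated delete-by-name method, since they all interpolate the name into the request path.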
While doing this, we accidentally passed `node_name=""`, which produces this API server log message:
```
5924 1 httplog.go:132] "HTTP" verb="DELETE" URI="/api/v1/nodes/" latency="182.687862ms" userAgent="OpenAPI-Generator/27.2.0/python" audit-ID="86d16672-050c-4f8d-a720-052e782c049e" srcIP="10.10.1.195:7186" apf_pl="exempt" apf_fs="exempt" apf_execution_time="153.526678ms" resp=200
```
This turns out to delete every node from the cluster; subsequent queries to the API confirm that our nodes are gone.
What you expected to happen:
The `delete_node`/`delete_node_with_http_info` methods should fail if the name is blank (`name=""`).
How to reproduce it (as minimally and precisely as possible):
Open a Python shell and import the `kubernetes` package. Initialize a `CoreV1Api` client and call `delete_node("")`; this should raise an `ApiValueError` instead of sending the request.
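The root cause is visible without a cluster: the generated client interpolates `name` into the path template, so an empty string collapses the single-node URL into the collection URL, and a DELETE on `/api/v1/nodes/` is treated as a delete-collection request by the API server. A simplified sketch of that interpolation (not the client's actual code):

```python
# Simplified sketch of how the generated client builds the request path.
PATH_TEMPLATE = "/api/v1/nodes/{name}"

def build_path(name):
    # No validation: a blank name silently yields the collection URL.
    return PATH_TEMPLATE.replace("{name}", name)

print(build_path("worker-1"))  # /api/v1/nodes/worker-1
print(build_path(""))          # /api/v1/nodes/  (delete-collection!)
```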
Anything else we need to know?:
Environment:
- Kubernetes version (`kubectl version`): 1.27
- OS (e.g., MacOS 10.13.6): Flatcar Linux, but replicated on MacOS and Fedora Linux
- Python version (`python --version`): 3.11.4
- Python client version (`pip list | grep kubernetes`): 27.2.0
As a fun aside, the log message showing that you've deleted all of your nodes only appears if the API server's log verbosity is set to at least 4. At `-v 2`, all of your nodes disappear and you have no idea why.
/assign @herlo
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten