bug: Machines being deleted on scaling up
What happened (please include outputs or screenshots):
api_response = api_instance.patch_namespaced_deployment_scale(name, namespace, {'spec': {'replicas': 2}})
When I try to scale up my machines using the `patch_namespaced_deployment_scale` function, it deletes the already running machines and then creates new machines to reach the requested replica count.
I have temporarily worked around it by using `kubectl scale deployment {name} --replicas=2`, which worked as expected: it scaled up the machines without affecting the ones already running.
What you expected to happen: While scaling up, it should not delete any running machines. Rather, it should just start more machines as required.
How to reproduce it (as minimally and precisely as possible): Use `patch_namespaced_deployment_scale` to scale up a deployment (see the sketch below).
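A minimal repro sketch (assumes a valid kubeconfig; the deployment name and namespace are placeholders):

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (placeholder cluster access).
config.load_kube_config()
apps_v1 = client.AppsV1Api()

# Patch the scale subresource of the deployment to 2 replicas.
api_response = apps_v1.patch_namespaced_deployment_scale(
    name="my-deployment",    # placeholder
    namespace="default",     # placeholder
    body={"spec": {"replicas": 2}},
)
print(api_response.spec.replicas)
```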
Anything else we need to know?: This is all the information I have, let me know if you need anything else.
Environment:
- Kubernetes version (`kubectl version`): Client Version: version.Info{Major:"1", Minor:"25"}, Server Version: version.Info{Major:"1", Minor:"27+"}
- OS (e.g., MacOS 10.13.6): Debian GNU/Linux 12 (bookworm)
- Python version (`python --version`): Python 3.9.17
- Python client version (`pip list | grep kubernetes`): 25.3.0
/assign @yliaog
the 'scale' behavior (whether to delete currently running pods or not) is implemented on the server side, by the scale subresource.
could you add "-v 9" to the kubectl command, and check the output: $ kubectl scale deployment {name} --replicas=2 -v 9
I0911 18:13:54.540003 2367239 request.go:1212] Request Body: {"spec":{"replicas":2}}
I0911 18:13:54.540058 2367239 round_trippers.go:466] curl -v -XPATCH -H "Accept: application/json, */*" -H "Content-Type: application/merge-patch+json" -H "User-Agent: kubectl/v1.28.1 (linux/amd64) kubernetes/8dc49c4" 'https://34.122.252.194/apis/apps/v1/namespaces/kube-system/deployments/l7-default-backend/scale'
I0911 18:13:54.611557 2367239 round_trippers.go:553] PATCH https://34.122.252.194/apis/apps/v1/namespaces/kube-system/deployments/l7-default-backend/scale 200 OK in 71 milliseconds
could you compare the above with the HTTP request sent from api_instance.patch_namespaced_deployment_scale, and print out the difference?
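One way to capture that request for comparison (a sketch, assuming kubeconfig access; the deployment name and namespace are placeholders) is to enable the client's debug output, which logs the HTTP method, headers, and body:

```python
from kubernetes import client, config

# Load kubeconfig into an explicit Configuration so debug logging can be enabled.
configuration = client.Configuration()
config.load_kube_config(client_configuration=configuration)
configuration.debug = True  # log request/response details, including headers

apps_v1 = client.AppsV1Api(client.ApiClient(configuration))
apps_v1.patch_namespaced_deployment_scale(
    name="my-deployment",   # placeholder
    namespace="default",    # placeholder
    body={"spec": {"replicas": 2}},
)
```

The Content-Type header in that output can then be compared against the `application/merge-patch+json` header kubectl sends above.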
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.