deployment update does not remove pod containers if some have been replaced
What happened (please include outputs or screenshots):
After a deployment update, if the list of containers has changed, the recreated pod's container list does not match the one provided in the update request.
What you expected to happen:
After the deployment update, the restarted pod's containers are exactly the ones specified in the request, no more, no less.
How to reproduce it (as minimally and precisely as possible):
I created a deployment with one pod running 5 containers:
[jcourtat@FR-JCT-F9PV25J rhel8] (master)$ oc get pods
NAME READY STATUS RESTARTS AGE
alis-x1p-white-586b579f97-npsbs 5/5 Running 0 32s
[jcourtat@FR-JCT-F9PV25J rhel8] (master)$ oc get deployment alis-x1p-white -o 'jsonpath={.items[0].spec.template.spec.containers[*].name}'
comip comsnmp transflux tracerelay confdata
Then I call the patch_namespaced_deployment(name="alis-x1p-white", namespace="ci", body=body) method with
body =
{'api_version': None,
'kind': None,
'metadata': {'annotations': {'configmap.reloader.stakater.com/reload': 'alis-x1p-white-ini',
'deployment.kubernetes.io/revision': '1'},
'labels': {'alis-kind': 'x1p',
'alis-product': 'x1p',
'app.kubernetes.io/instance': 'alis-x1p-white',
'app.kubernetes.io/managed-by': 'kalisto'},
'managed_fields': None,
'name': 'alis-x1p-white',
'namespace': None,...
},
'spec': {...,
'selector': {'match_expressions': None,
'match_labels': {'alis-kind': 'x1p',
'alis-product': 'x1p',
'app.kubernetes.io/instance': 'alis-x1p-white',
'app.kubernetes.io/managed-by': 'kalisto'}},
'strategy': {'rolling_update': None, 'type': 'Recreate'},
'template': {'metadata': {'annotations': {'k8s.v1.cni.cncf.io/networks': 'alis-x1p-white-if-bind-eno4'},
'labels': {'alis-kind': 'x1p',
'alis-product': 'x1p',
'app.kubernetes.io/instance': 'alis-x1p-white',
'app.kubernetes.io/managed-by': 'kalisto'},
'managed_fields': None,
'name': None,...},
'spec': {...,
'containers': [{'args': ['-m', 'x1p', '-j', 'ComIP'],
                'command': ['/opt/run_in_k8s'],
                'image': 'image-registry:5000/ci/alis:19',
                'image_pull_policy': 'Always',
                'liveness_probe': None,
                'name': 'comip', ...},
               {'args': ['-m', 'x1p', '-j', 'ComHI3'],
                'command': ['/opt/run_in_k8s'],
                'image': 'image-registry:5000/ci/alis:19',
                'image_pull_policy': 'Always',
                'liveness_probe': None,
                'name': 'comhi3', ...},
               {'args': ['-m', 'x1p', '-j', 'TransFlux'],
                'command': ['/opt/run_in_k8s'],
                'image': 'image-registry:5000/ci/alis:19',
                'image_pull_policy': 'Always',
                'liveness_probe': None,
                'name': 'transflux', ...},
               {'args': ['-m', 'x1p', '-j', 'TraceRelay'],
                'command': ['/opt/run_in_k8s'],
                'image': 'image-registry:5000/ci/alis:19',
                'image_pull_policy': 'Always',
                'liveness_probe': None,
                'name': 'tracerelay', ...},
               {'args': ['-m', 'x1p', '-j', 'ConfData'],
                'command': ['/opt/run_in_k8s'],
                'image': 'image-registry:5000/ci/alis:19',
                'image_pull_policy': 'Always',
                'liveness_probe': None,
                'name': 'confdata', ...}]}}}
After the pod restarted, the container named comsnmp was not removed, while the new container named comhi3 was created as expected:
[jcourtat@FR-JCT-F9PV25J rhel8] (master)$ oc get pods
NAME READY STATUS RESTARTS AGE
alis-x1p-white-6cb476f468-nzzrr 0/6 ContainerCreating 0 11s
[jcourtat@FR-JCT-F9PV25J rhel8] (master)$ oc get deployment -o 'jsonpath={.items[0].spec.template.spec.containers[*].name}'
comip comhi3 comsnmp transflux tracerelay confdata
The comsnmp container wasn't removed as expected, even though it was not present in the V1DeploymentSpec object.
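This is consistent with strategic-merge-patch semantics: the containers list is merged by its "name" merge key, so entries missing from the patch body are kept rather than deleted. A simplified, hypothetical model of that list merge (not the client's or apiserver's actual implementation), which reproduces the observed result:

```python
def strategic_merge_by_key(current, patch, key="name"):
    """Toy model of strategic-merge-patch list handling.

    Entries are matched by the merge key ("name" for containers).
    Entries present only in the current object are KEPT, which is
    why comsnmp survived even though it was absent from the patch.
    """
    merged = {item[key]: dict(item) for item in current}
    for item in patch:
        merged.setdefault(item[key], {}).update(item)
    return list(merged.values())

# Abbreviated container lists from the report: comsnmp is dropped
# and comhi3 is added in the desired state.
current = [{"name": "comip"}, {"name": "comsnmp"}, {"name": "transflux"},
           {"name": "tracerelay"}, {"name": "confdata"}]
desired = [{"name": "comip"}, {"name": "comhi3"}, {"name": "transflux"},
           {"name": "tracerelay"}, {"name": "confdata"}]

result = strategic_merge_by_key(current, desired)
print(sorted(c["name"] for c in result))  # comsnmp survives alongside comhi3
```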
Anything else we need to know?:
If I edit the deployment manually using kubectl, the container is removed:
[jcourtat@FR-JCT-F9PV25J rhel8] (master)$ oc edit deployment alis-x1p-white
"/tmp/oc-edit-1167198696.yaml" 228L, 6914C written
deployment.apps/alis-x1p-white edited
[jcourtat@FR-JCT-F9PV25J rhel8] (master)$ oc get deployment -o 'jsonpath={.items[0].spec.template.spec.containers[*].name}'
comip comhi3 transflux tracerelay confdata
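kubectl edit submits the full object with a PUT, which takes the containers list verbatim instead of merging it. A hedged sketch of the equivalent read-modify-write using the client's replace_namespaced_deployment (the helper name is mine; apps_api is assumed to be a kubernetes.client.AppsV1Api instance):

```python
def replace_container_list(apps_api, name, namespace, desired_containers):
    # Read the live object so the PUT carries a current resourceVersion,
    # swap in the desired container list, then replace the whole object.
    # A PUT (replace) takes the list as-is, so containers absent from
    # desired_containers are removed -- unlike a strategic merge patch.
    current = apps_api.read_namespaced_deployment(name, namespace)
    current.spec.template.spec.containers = desired_containers
    return apps_api.replace_namespaced_deployment(name, namespace, current)
```

Note the read-modify-write can race with other writers; retrying on a 409 Conflict is the usual mitigation.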
Environment:
- Kubernetes version (kubectl version):
  Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v0.21.0-beta.1", GitCommit:"96e95cef877ba04872b88e4e2597eabb0174d182", GitTreeState:"clean", BuildDate:"2021-11-15T14:54:31Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
  Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5+5c84e52", GitCommit:"ce18cbe56f6e88a8fc0e06366afe113b415ad39b", GitTreeState:"clean", BuildDate:"2022-03-01T18:44:38Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}
- OS (e.g., MacOS 10.13.6): RHEL7
- Python version (python3.6 --version): Python 3.6.8
- Python client version (python3.6 -m pip list | grep kubernetes): kubernetes 22.6.0
Regards,
Julien
Hi @jcourtat, I think this is caused by the patch strategy; maybe you can find the solution in this doc.
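Two possible workarounds, sketched here as assumptions rather than verified behavior for every client version: a raw-dict strategic merge patch carrying an explicit "$patch": "delete" directive for the container to drop, or an RFC 6902 JSON Patch (the Python client selects application/json-patch+json when the PATCH body is a list) that replaces the containers array wholesale:

```python
# Option 1: strategic merge patch with an explicit delete directive.
# Plain-dict bodies are sent as a strategic merge patch; the
# "$patch": "delete" directive removes the entry matched by "name".
smp_body = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"name": "comsnmp", "$patch": "delete"},
                ]
            }
        }
    }
}

# Option 2: JSON Patch (RFC 6902). With a list body, "replace" swaps
# the whole containers array, dropping anything not listed.
json_patch = [
    {
        "op": "replace",
        "path": "/spec/template/spec/containers",
        "value": [
            {"name": "comip"},      # abbreviated; full container specs go here
            {"name": "comhi3"},
            {"name": "transflux"},
            {"name": "tracerelay"},
            {"name": "confdata"},
        ],
    }
]

# Either body would then be applied with (requires cluster access,
# so not executed here):
# apps.patch_namespaced_deployment(name="alis-x1p-white", namespace="ci",
#                                  body=smp_body)   # or body=json_patch
```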
/assign @showjason Thank you!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.