cri-tools
crictl rmi with force option
I'm looking for the Docker equivalent of docker rmi -f imageid. Is there any?
Hey @AsherShoshan, is this the same request as in https://github.com/kubernetes-sigs/cri-tools/issues/399? Do you think we can close this one in favor of it?
No. It's not the same.
The Kubernetes CRI is not able to specify something like a force flag for image removal. For example, CRI-O fails if the image is currently in use. The API states that the call should not return an error if the image has already been removed. @feiskyer, is there any expected behavior when an image is in use?
All,
The use case is:
If you have a pod with a container image tagged :latest and the imagePullPolicy is "IfNotPresent", then you have no way to refresh the image with a newer one.
Alright, so one approach could be to force-stop the containers before actually removing the image. The problem is that we cannot guarantee that the kubelet does not re-schedule the workloads before we are actually able to remove the image. :-/
In Docker, if you forcefully remove an image that a running container uses, the image the container runs from is still left behind. That image cannot be removed, and Docker rejects the attempt with something like:
> docker rmi -f 961769676411
Error response from daemon: conflict: unable to delete 961769676411 (cannot be forced) - image is being used by running container b1bdc1cf2207
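For reference, here is a rough sketch of the manual sequence a force option would essentially have to automate. The image reference is a hypothetical placeholder, and the grep match against the inspect output is only an illustrative shortcut, not how a real implementation would filter containers:

#!/bin/sh
# Sketch only: stop and remove every container that references the image,
# then remove the image itself. IMAGE is a hypothetical placeholder.
IMAGE="docker.io/library/nginx:latest"

for ctr in $(crictl ps -a -q); do
  # Crude match: look for the image reference anywhere in the inspect output.
  if crictl inspect -o json "$ctr" | grep -q "$IMAGE"; then
    crictl stop "$ctr"   # stop the container so the image is no longer in use
    crictl rm "$ctr"     # remove the stopped container
  fi
done

crictl rmi "$IMAGE"      # with no containers referencing it, removal can succeed

As noted above, this is racy: the kubelet may re-create the containers or re-pull the image before the final rmi runs.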
You could pull the new image via crictl pull and restart the containers.
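For illustration, on a single node the suggested workaround could look roughly like this (image and pod names are hypothetical placeholders):

# Pull the newer :latest so it replaces the tag cached on this node.
crictl pull docker.io/myorg/myapp:latest

# Then restart the workload so it starts from the freshly pulled image, for
# example by deleting the pod and letting its controller recreate it.
kubectl delete pod myapp-7d4b9c-abc12

Because IfNotPresent skips the pull but still uses the locally cached tag, the restarted container starts from the image that was just pulled.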
But then you'd have to do it on every node...
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
any news here?
/reopen
@abdennour: You can't reopen an issue/PR unless you authored it or you are a collaborator.
@abdennour thanks for reaching out! Would you consider working on a solution for this?
I am thinking about a -f option to be added, @saschagrunert.
We would have to ensure that the containers are stopped before removing the image. I mean, yeah let's propose that as a PR.
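A minimal sketch of the intended user-facing behavior; the -f flag shown here is the proposal under discussion, not an option crictl currently has:

# Proposed: stop and remove any containers still using the image, then
# remove the image itself, mirroring docker rmi -f.
crictl rmi -f 961769676411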
/assign @abdennour
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".