Update image used for image volume task
Description
This PR resolves issues I ran into following Use an Image Volume With a Pod with containerd:
- The pod gets stuck in `ErrImagePull` because of the following error, also observed in CI:

  ```
  Failed to pull image "quay.io/crio/artifact:v1": failed to pull and unpack image "quay.io/crio/artifact:v1": number of layers and diffIDs don't match: 1 != 0
  ```

  This was similarly fixed in CI by moving away from that image in kubernetes/kubernetes#130135.
- The syntax for the `kubectl attach` command was invalid. I updated it to use `kubectl exec` instead.
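For context, the corrected step looks roughly like this (the pod name and mount path follow the task page's example as I remember it; this is a sketch rather than the exact diff):

```shell
# kubectl attach expects an already-running interactive process;
# kubectl exec starts a new one, which is what this step needs.
# Pod name and file path are assumptions based on the task page's example.
kubectl exec image-volume -- cat /volume/dir/file
```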
Issue
Closes: #
/cc @saschagrunert
Pull request preview available for checking
Built without sensitive environment variables
| Name | Link |
|---|---|
| Latest commit | 24953114c09a216cf2275890f28a10540fdda2ae |
| Latest deploy log | https://app.netlify.com/sites/kubernetes-io-main-staging/deploys/67db597cafebe200083e160e |
| Deploy Preview | https://deploy-preview-50158--kubernetes-io-main-staging.netlify.app |
We probably have to incorporate that into https://github.com/kubernetes/website/pull/49936
Testing this PR locally on Killercoda has been difficult. This part of the documentation isn't clear:
- The container runtime needs to support the image volumes feature
- You need to exec commands in the host
- You need to be able to exec into pods
- You need to enable the `ImageVolume` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
Which container runtimes support it? I am using io.containerd.runc.v2.
Which commands need to be run on the host? The page doesn't say.
You may not be able to use Killercoda to test this alpha feature @network-charles
Alright
We may need to remove this line; it says we can test it on Killercoda:
https://github.com/kubernetes/website/blob/24953114c09a216cf2275890f28a10540fdda2ae/content/en/docs/tasks/configure-pod-container/image-volumes.md?plain=1#L18
I don't think https://kubernetes.io/docs/tasks/configure-pod-container/image-volumes/#before-you-begin is outright wrong. In a separate PR, we could clarify that the playgrounds (that are external) may not support all alpha / beta features.
This PR is about changing the example volume.
Alright
Since it's a minor fix, it’d be great if @nojnhuh would resolve it here rather than opening a new PR.
(writing as an approver for English) @nojnhuh you are welcome to keep this PR fixed on the one issue. We prefer to open separate issues when we spot unrelated concerns; @network-charles, if you're willing, you could open one.
Alright, I will.
Can you tell me how to test the example locally?
Maybe we could add a hint like this to the page! Anyway, try:
```shell
minikube start --feature-gates=ImageVolume=true
```
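From there, a rough test loop might look like this (the manifest URL is an assumption based on where the task page's example is usually published; adjust if it differs):

```shell
# Apply the example manifest from the task page (URL assumed, not verified)
kubectl apply -f https://k8s.io/examples/pods/image-volumes.yaml

# Watch the pod start; ErrImagePull here reproduces the issue this PR fixes
kubectl get pod image-volume --watch
```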
@network-charles this is the wrong change though, so as things stand it's not useful to put a lot of effort into testing it.
Sorry, what do you mean by “this is the wrong change”?
https://github.com/kubernetes/website/pull/50158#pullrequestreview-2703072347
The image @nojnhuh is suggesting isn't a good example of an image to use as an image volume, because it's an executable image.
Ideally, we pick an image that isn't executable the way a traditional container would be.
My main motivation for this PR was to modify the example to also work with containerd, and that runtime doesn't support mounting non-executable OCI artifacts yet: https://github.com/containerd/containerd/issues/11381
I understand mounting a "regular" container image is the less interesting use case, but that happens to be the lowest common denominator among what works with both CRI-O and containerd at the moment AFAIK.
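To make that lowest common denominator concrete, here's a sketch of the kind of pod spec I mean (the volume image below is a placeholder assumption, not a final pick for the page):

```shell
# Sketch only: mount a regular (executable) container image as a read-only
# volume. Less interesting than a non-executable OCI artifact, but it works
# under both CRI-O and containerd today.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: image-volume
spec:
  containers:
  - name: shell
    image: debian
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: volume
      mountPath: /volume
  volumes:
  - name: volume
    image:
      # Placeholder image; substitute whatever the page settles on
      reference: docker.io/library/busybox:1.36
      pullPolicy: IfNotPresent
EOF
```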
Hi @sftim, the Alpine image (docker.io/library/alpine:3) you suggested didn't work for me on my minikube cluster.
```
kubectl get pod
NAME           READY   STATUS                 RESTARTS   AGE
image-volume   0/1     CreateContainerError   0          5m41s
```
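For the record, this is roughly how I dug into the failure (plain kubectl; nothing here is specific to this PR):

```shell
# Container statuses plus recent events usually name the exact failure
kubectl describe pod image-volume

# Or filter the events for just this pod
kubectl get events --field-selector involvedObject.name=image-volume
```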
PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign natalisucks for approval. For more information see the Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Reopen this PR with `/reopen`
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to the `/close` command in the triage comment above.