kubectl
debug: ability to replicate volume mounts
What would you like to be added:
When debugging by adding a container to a pod, I'd like the ability to match the volume mounts of the target container.
Why is this needed:
From a new debug container added to a running pod, I can't access data that other containers can see via volume mounts. Obvious candidates: emptyDir, NFS mounts, ConfigMap/Secret mounts, EBS volumes, etc.
Ideas:
- Replicate some/all volume mounts from one other container
- Expose all the volumes available to the pod so they can be mounted by name: --volume-mount=name:/path. There are a lot of options exposed (subPaths, read-only, etc.) to try to cram into a CLI option, though.
IMO the first option is probably the easiest and solves most use cases. Maybe something like:
--with-volume-mounts[=<container>]: container name to attempt to replicate volume mounts from. Without a container name, it uses the container from --target.
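A sketch of what usage of the proposed flag might look like (note: --with-volume-mounts is only the proposal above, not an existing kubectl debug flag, and the pod, container, and image names are placeholders):

# Replicate the volume mounts of the --target container into the debug container
kubectl debug mypod -it --image=busybox --target=app --with-volume-mounts

# Or replicate the volume mounts of an explicitly named container
kubectl debug mypod -it --image=busybox --target=app --with-volume-mounts=sidecar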
/cc @verb
This fits within the theme of providing a bit more configurability for kubectl debug. I'll add it to the list in kubernetes/enhancements#1441 to inform the design.
@rcoup for your use case does accessing the target container's root filesystem via /proc/$PID/root work? (assuming linux containers)
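For readers unfamiliar with that trick, a minimal sketch of what it looks like (assuming Linux containers, a pod named mypod, and a target container named app; the PID will vary). As the follow-up below notes, the ls can fail with Permission denied unless the pod shares its process namespace or the debug container has SYS_PTRACE:

# Start an ephemeral debug container targeting the app container
kubectl debug -it mypod --image=busybox --target=app

# Inside the debug container: find the target process with ps, then browse
# its root filesystem via /proc/<PID>/root (7 here is just an example PID)
ps ax
ls /proc/7/root/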
I get ls: cannot access '/proc/7/root/': Permission denied, so I guess I need https://github.com/kubernetes/kubernetes/issues/97103 resolved first? (Or add capabilities to, or make privileged, my original pod?)
That's interesting. No special capabilities are required when shareProcessNamespace is true, but otherwise SYS_PTRACE is required. I didn't realize that.
Ok, so no, this trick doesn't work for your use case.
It would be nice if this feature were usable without the --copy-to argument.
/assign @verb
/triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
This is still a viable feature.
@morremeyer: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
This is still a viable feature.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Can a collaborator please reopen this? This is still a feature that would make sense to implement.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@verb, may I ask about the schedule for this feature? I didn't find it in the KEP (https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/277-ephemeral-containers).
Thanks in advance!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
A good example of a use case for this is the dotnet monitor. It expects to communicate with the dotnet service over a socket mounted in as a volume. See: https://github.com/dotnet/dotnet-monitor/blob/main/documentation/kubernetes.md
Being able to use ephemeral containers to debug existing pods in a non-intrusive way would be a great thing. ;-)
Maybe this is a separate issue, but I think it would be interesting to extend this functionality even further. Hopefully most of us don't allow writing to the root disk inside our containers, and not every app needs a /tmp volume or similar to run, so a debug container may have nowhere writable at all.
It would be really nice if a debug container could also attach an ephemeral disk, or maybe even better, any volume you want. That way you could also save some data from your debugging, for example profiling output.
For anyone who (like me) finds this issue while searching for a solution, I wanted to share that you can do this by making a direct patch to the ephemeralcontainers subresource, rather than using kubectl debug. For example, I needed to mount the same ca-certificates that were mounted in the target container, in the debug container:
$ kubectl proxy
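# In a second terminal, PATCH the pod's ephemeralcontainers subresource through the local proxy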
$ curl http://localhost:8001/api/v1/namespaces/your-namespace/pods/your-pod-name/ephemeralcontainers \
-X PATCH \
-H 'Content-Type: application/strategic-merge-patch+json' \
-d '
{
"spec":
{
"ephemeralContainers":
[
{
"name": "debugger",
"command": ["sh"],
"image": "your-debug-image",
"targetContainerName": "your-target-container-name",
"stdin": true,
"tty": true,
"volumeMounts": [{
"mountPath": "/etc/ca-certificates",
"name": "ca-certificates",
"readOnly": true
}]
}
]
}
}'
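# Once the ephemeral container is running, attach to it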
$ kubectl -n your-namespace attach your-pod-name -c debugger -ti
FWIW, I made a shell (zsh, at least) function that copies environment variables, envFrom configuration, and volume mounts and does this manually via kubectl, jq, and curl commands. Note that the image is hardcoded to alpine:latest, but that's easily changed:
https://gist.github.com/nathanmcgarvey-modopayments/358a84297086de3975f54895a9e7123d
Edit: Note that I only tested this on AWS EKS 1.27. YMMV.
Can I request that ephemeralcontainers be added to the allowed subresources of Pods for kubectl patch? That would make the above three steps a bit easier: one could simply run kubectl patch --type=merge --patch-file=<file> and then attach, instead of also needing to run kubectl proxy (which opens a local port that anyone on the same server could use to send API requests as the person running kubectl proxy).
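For reference, a sketch of what the requested workflow might look like (hypothetical: whether kubectl patch accepts ephemeralcontainers via --subresource depends on the kubectl and server versions, and debug-patch.json is a placeholder file containing the same strategic-merge patch body shown in the curl example above):

kubectl -n your-namespace patch pod your-pod-name \
  --subresource=ephemeralcontainers \
  --type=strategic \
  --patch-file=debug-patch.json
kubectl -n your-namespace attach your-pod-name -c debugger -ti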
This will be handled by https://github.com/kubernetes/kubernetes/pull/120346, and I'm closing this because we'll track the feature via the KEP.
/close
@ardaguclu: Closing this issue.
In response to this:
This will be handled by https://github.com/kubernetes/kubernetes/pull/120346, and I'm closing this because we'll track the feature via the KEP.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.