kubectl cp returns an error: "Dropping out copy after 0 retries error: unexpected EOF"
What happened:
```
➜ ~ kubectl cp openapi-test/centos-pod:home/log ./test
Dropping out copy after 0 retries
error: unexpected EOF
➜ ~ kubectl cp openapi-test/centos-pod:home/log ./test
Dropping out copy after 0 retries
error: unexpected EOF
➜ ~ kubectl cp openapi-test/centos-pod:home/log ./test
Dropping out copy after 0 retries
error: unexpected EOF
➜ ~ kubectl cp openapi-test/centos-pod:home/log ./test
Dropping out copy after 0 retries
error: unexpected EOF
➜ ~ kubectl cp openapi-test/centos-pod:home/log ./test
Dropping out copy after 0 retries
error: unexpected EOF
➜ ~ kubectl cp openapi-test/centos-pod:home/log ./test
➜ ~ kubectl cp openapi-test/centos-pod:home/log ./test
Dropping out copy after 0 retries
error: unexpected EOF
➜ ~ kubectl cp openapi-test/centos-pod:home/log ./test
➜ ~ kubectl cp openapi-test/centos-pod:home/log ./test
Dropping out copy after 0 retries
error: unexpected EOF
➜ ~ kubectl cp openapi-test/centos-pod:home/log ./test
Dropping out copy after 0 retries
error: unexpected EOF
```
Only two of the attempts above succeeded; the command does not work reliably.
What you expected to happen: kubectl cp should copy the directory successfully every time.
Environment:
```
➜ ~ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:21:56Z", GoVersion:"go1.18.4", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"22+", GitVersion:"v1.22.10-aliyun.1", GitCommit:"2e6f009b4878915cb2f420c77b0bb5c50ffa6141", GitTreeState:"clean", BuildDate:"2023-04-24T04:42:55Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.24) and server (1.22) exceeds the supported minor version skew of +/-1
```
Please try using a leading slash when denoting the `/home` directory on the container, and let us know if that works. The command would be
```
kubectl cp openapi-test/centos-pod:/home/log ./test
```
...if you're copying the file `/home/log` from the container to local.
Additionally, try the following command to see if the file exists on the container before copying (note that `kubectl exec` takes the namespace via `-n` rather than the `namespace/pod` form used by `kubectl cp`):
```
kubectl exec -n openapi-test centos-pod -- sh -c "ls -l /home/log"
```
/triage needs-information
In addition to what Sean has said, the kubectl client (and the Kubernetes node version) you are on is quite outdated. I would suggest updating to a more recent version of both, but at a minimum keep the node and client versions closer in sync: as noted by the warning message, Kubernetes only supports a client and server within one minor version of each other. That means you'll need to either upgrade your node to 1.23, 1.24, or 1.25, or downgrade your client to 1.21, 1.22, or 1.23.
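As a rough sketch of the client downgrade (assuming macOS on arm64 to match the environment above, and that the official release downloads provide a darwin/arm64 build for that version):
```sh
# Download a kubectl client that matches the server's minor version (v1.22.x).
# The exact version and install path are illustrative; adjust them to your setup.
curl -LO "https://dl.k8s.io/release/v1.22.10/bin/darwin/arm64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl

# Confirm client and server are now within one minor version of each other.
kubectl version
```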
> Please try using a leading slash when denoting the `/home` directory on the container, and let us know if that works. The command would be `kubectl cp openapi-test/centos-pod:/home/log ./test` ...if you're copying the file `/home/log` from the container to local. Additionally, try the following command to see if the file exists on the container before copying.
> `kubectl exec -n openapi-test centos-pod -- sh -c "ls -l /home/log"`
```
kubectl cp openapi-test/centos-pod:/home/log ./test
tar: Removing leading `/' from member names
Dropping out copy after 0 retries
```
It doesn't seem to help; kubectl removes the leading slash anyway.
For anyone else suffering from this issue, adding `--retries 10` to the `kubectl cp` command seems to resolve the issue.
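For reference, a minimal sketch of that invocation (the namespace, pod name, and paths are placeholders based on the report above):
```sh
# Retry the copy up to 10 times instead of aborting after the first interrupted transfer.
kubectl cp <namespace>/<pod-name>:/home/log ./test --retries 10
```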
/triage accepted
I wonder if this is related to https://github.com/kubernetes/kubernetes/issues/60140?
There is some work underway to transition from SPDY to WebSockets, which may fix this problem.
Case
In my case, this issue happened when I copied a 300MB .json index backup file created by elasticdump to the local filesystem.
```mermaid
---
title: System architecture
---
flowchart LR
    subgraph VPC
        subgraph Kubernetes Cluster
            pod[Pod]
        end
        elasticsearch[Elasticsearch]
    end
    local[local]
    local --"[2] kubectl cp"--> pod --"[1] elasticdump"--> elasticsearch
```
Solution
I solved it by adding the `--retries 10` option to the `kubectl cp` command, as @JCSadeghi suggested in the comment above:
> For anyone else suffering from this issue, adding `--retries 10` to the `kubectl cp` command seems to resolve the issue.
- kubectl version: Client Version: v1.29.1
- local arch: arm64
Detailed log
During the process of copying index backup files, the copy may fail with the error `Dropping out copy after 0 retries`. When running the `kubectl cp` command, configure retries by adding the `--retries 10` option:
```
kubectl cp <namespace>/<pod-name>:/tmp/index-market-backup.json ./index-market-backup.json --retries 10
```
Last failed result without the `--retries` option:
```
$ kubectl cp <namespace>/<pod-name>:/tmp/index-market-backup.json ./index-market-backup.json
tar: removing leading '/' from member names
Dropping out copy after 0 retries
error: unexpected EOF
...
```
Successful result with the `--retries` option added:
```
$ kubectl cp <namespace>/<pod-name>:/tmp/index-market-backup.json ./index-market-backup.json --retries 10
tar: removing leading '/' from member names
Resuming copy at 315394048 bytes, retry 1/10
tar: removing leading '/' from member names

$ ls -lh $HOME/index-market-backup.json
-rw-r--r--@ 1 john.doe  staff   303M  6 11 09:25 index-market-backup.json
```
It's not clear why, but you can see that a single retry occurs almost at the end of the transfer. Adding the `--retries` option was enough to prevent the failure.
I ran into the same issue. With `--retries 10`:
```
tar: Removing leading `/' from member names
Resuming copy at 83996672 bytes, retry 1/10
tar: Removing leading `/' from member names
```
I ran into this issue multiple times in the past, and today it happened again. The underlying issue seems to be the `exec` call that is used internally to invoke `tar c ...` inside the source pod. This `exec` sometimes swallows the final bytes of stdout, causing the `tar x ...` on the local machine to fail.
This can easily be reproduced with the following commands:
```
# First, create a test file that is large enough to produce some load on the stdout piping of exec
kubectl exec -n <namespace> <pod-name> -- dd if=/dev/random of=/tmp/test-file bs=1M count=200

# Now run a loop that will eventually fail. It tars the test file inside the pod and tries to list the entries locally
while kubectl exec -i -n <namespace> <pod-name> -- tar c /tmp/test-file | tar t; do echo "succeeded, retrying"; done
```
After a few iterations, it will output `tar: short read`, indicating that it unexpectedly reached EOF.
I was only able to reproduce this by piping the output of `exec` directly into `tar`. Any other setup where I tried to capture a copy of the tar stream led to the loop never failing; I tried it with `kubectl exec ... | tee test.tar | tar t` and with `kubectl exec ... > test.tar && tar tf test.tar`.
My guess would be that this is somehow related to how `exec` shuts down, missing a final flush or something like that. It might also be related to signal handling; otherwise I could not explain why I was unable to reproduce it with `tee` in between.
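Based on that observation, one possible workaround (just a sketch, under the assumption that redirecting to a file really does avoid the truncation; the namespace, pod name, and path are placeholders) is to write the stream to a local file and verify it before extracting:
```sh
# Workaround sketch: stream the archive out of the pod into a file instead of
# piping it straight into `tar x`, then verify the archive before extracting.
kubectl exec -i -n <namespace> <pod-name> -- tar cf - /home/log > copy.tar

# List the archive to catch a truncated transfer, and only then extract it.
tar tf copy.tar > /dev/null && tar xf copy.tar
```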