Connection timeout after 10 min
Expected behavior
Copying a 140 GB file from Bucket 1 in GCP project A to Bucket 2 in GCP project B with the minio_client cp command completes successfully.
Actual behavior
Error:
minio_client: <ERROR> Failed to copy `https://storage.googleapis.com/MY_BACKET-backup1/FILE`. Put "https://REMOTE_BACKET-backup1.storage.googleapis.com/FILE": net/http: HTTP/1.x transport connection broken: http: ContentLength=21474836480 with Body length 0
Steps to reproduce the behavior
The real issue is a 10-minute limit after which the operation fails, so the problem can be reproduced with a smaller file by limiting the upload speed:
FILE is 20 GB in size
Command:
time minio_client cp --limit-upload=33M gcs/MY_BACKET-backup1/FILE linked_REMOTE_ENV/REMOTE_BACKET-backup1/FILE
Same behaviour for the command:
time minio_client cp --limit-upload=33M miniogw/MY_BACKET-backup1/FILE linked_REMOTE_ENV/REMOTE_BACKET-backup1/FILE
mc --version
minio_client --version
minio_client version RELEASE.2023-08-08T17-23-59Z (commit-id=01fb7c5a96ccc8bab434d1210847279710c8ae93)
Runtime: go1.19.12 linux/amd64
Copyright (c) 2015-2023 MinIO, Inc.
License GNU AGPLv3 https://www.gnu.org/licenses/agpl-3.0.html
System information
The instance is an n2-standard-2 located in project A.
MinIO config:
{
"aliases": {
"gcs": {
"url": "https://storage.googleapis.com",
"accessKey": "KEY1",
"secretKey": "SKEY1",
"api": "s3v4",
"path": "auto"
},
"miniogw": {
"url": "http://127.0.0.1:9000",
"accessKey": "KEY2",
"secretKey": "SKEY2",
"api": "s3v4",
"path": "auto"
},
"linked_REMOTE_ENV": {
"url": "https://storage.googleapis.com",
"accessKey": "KEY3",
"secretKey": "SKEY3",
"api": "s3v4",
"path": "auto"
}
},
"version": "10"
}
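For reference, the same aliases could have been created with mc alias set; a rough sketch using the placeholder URLs and keys from the config above (the --api and --path flags are assumed to correspond to the "api" and "path" settings):
minio_client alias set gcs https://storage.googleapis.com KEY1 SKEY1 --api s3v4 --path auto
minio_client alias set miniogw http://127.0.0.1:9000 KEY2 SKEY2 --api s3v4 --path auto
minio_client alias set linked_REMOTE_ENV https://storage.googleapis.com KEY3 SKEY3 --api s3v4 --path auto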
Additional info
Every copy operation fails if it takes more than 10 minutes.
At the same time, a plain curl PUT works without errors even when it takes more than 20 minutes.
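The exact curl command isn't shown here; a minimal sketch of such a baseline PUT against the GCS XML API, assuming a bearer token from gcloud and the bucket/object placeholders used above:
curl -T FILE -H "Authorization: Bearer $(gcloud auth print-access-token)" https://storage.googleapis.com/MY_BACKET-backup1/FILE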
I assume the issue could be related to some keep-alive parameter inside the MinIO client or to some limitation of the S3 protocol.
No proxies, load balancers, or host limits (tried different keepalive parameters as well), and no firewall restrictions.
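The keepalive parameters tried were presumably the kernel TCP keepalive sysctls; a sketch of checking and temporarily adjusting them (the value below is only an example):
sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes
sysctl -w net.ipv4.tcp_keepalive_time=60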
Hi, any updates here? Or at least some details explaining this issue?
@drama17 I suspect this might be GCP timeouts/limits. mc cp is used daily to copy quite a large number of files, including files of this size, and we are not experiencing this problem anywhere.
However, I will run a couple of tests when I find some time.
Hi @zveinn
I've tried again, but with the latest mc version. The result is the same. Here is the end of the output with debug mode on:
/var/lib/mongo/tmp/FILE: 156.25 GiB / 156.25 GiB ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ 52.60 MiB/sminio_client: <DEBUG> PUT /somebucketname-dev1-backup1/tmp/FILE?partNumber=10000&uploadId=13d85232-9ed5-4ac5-9319-b34b9065e5c0 HTTP/1.1
Host: 127.0.0.1:9000
User-Agent: MinIO (linux; amd64) minio-go/v7.0.67 minio_client/RELEASE.2024-03-09T06-43-06Z
Content-Length: 16800342
Accept-Encoding: zstd,gzip
Authorization: AWS4-HMAC-SHA256 Credential=1234/20240313/us-east-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length,Signature=**REDACTED**
X-Amz-Content-Sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD
X-Amz-Date: 20240313T135311Z
X-Amz-Decoded-Content-Length: 16777216
minio_client: <DEBUG> HTTP/1.1 200 OK
Content-Length: 0
Accept-Ranges: bytes
Content-Security-Policy: block-all-mixed-content
Date: Wed, 13 Mar 2024 13:53:13 GMT
Etag: "d2a55bcc2d59421b9e1a9df4c0c33a73-1"
Server: MinIO
Strict-Transport-Security: max-age=31536000; includeSubDomains
Vary: Origin
Vary: Accept-Encoding
X-Amz-Request-Id: 17BC575784F5E7F6
X-Content-Type-Options: nosniff
X-Xss-Protection: 1; mode=block
minio_client: <DEBUG> Response Time: 1.622878476s
/var/lib/mongo/tmp/FILE: 156.25 GiB / 156.25 GiB ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ 52.43 MiB/sminio_client: <DEBUG> POST /somebucketname-dev1-backup1/tmp/FILE?uploadId=13d85232-9ed5-4ac5-9319-b34b9065e5c0 HTTP/1.1
Host: 127.0.0.1:9000
User-Agent: MinIO (linux; amd64) minio-go/v7.0.67 minio_client/RELEASE.2024-03-09T06-43-06Z
Content-Length: 888993
Accept-Encoding: zstd,gzip
Authorization: AWS4-HMAC-SHA256 Credential=1234/20240313/us-east-1/s3/aws4_request, SignedHeaders=content-type;host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
Content-Type: application/octet-stream
X-Amz-Content-Sha256: 4d9601daa24a2386b1e623dd25aa659980e072897029c432967da2910ac0a887
X-Amz-Date: 20240313T135313Z
minio_client: <DEBUG> HTTP/1.1 200 OK
Transfer-Encoding: chunked
Cache-Control: no-cache
Content-Encoding: gzip
Content-Security-Policy: block-all-mixed-content
Content-Type: text/event-stream
Date: Wed, 13 Mar 2024 13:53:23 GMT
Strict-Transport-Security: max-age=31536000; includeSubDomains
Vary: Origin
Vary: Accept-Encoding
X-Accel-Buffering: no
X-Amz-Request-Id: 17BC5757E758C369
X-Content-Type-Options: nosniff
X-Xss-Protection: 1; mode=block
minio_client: <DEBUG> Response Time: 10.150146034s
/var/lib/mongo/tmp/FILE: 156.25 GiB / 156.25 GiB ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ 46.64 MiB/sminio_client: <DEBUG> DELETE /somebucketname-dev1-backup1/tmp/FILE?uploadId=13d85232-9ed5-4ac5-9319-b34b9065e5c0 HTTP/1.1
Host: 127.0.0.1:9000
User-Agent: MinIO (linux; amd64) minio-go/v7.0.67 minio_client/RELEASE.2024-03-09T06-43-06Z
Accept-Encoding: zstd,gzip
Authorization: AWS4-HMAC-SHA256 Credential=1234/20240313/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20240313T135557Z
minio_client: <DEBUG> HTTP/1.1 204 No Content
Accept-Ranges: bytes
Content-Security-Policy: block-all-mixed-content
Date: Wed, 13 Mar 2024 13:59:42 GMT
Server: MinIO
Strict-Transport-Security: max-age=31536000; includeSubDomains
Vary: Origin
Vary: Accept-Encoding
X-Amz-Request-Id: 17BC577E2A58BED4
X-Content-Type-Options: nosniff
X-Xss-Protection: 1; mode=block
minio_client: <DEBUG> Response Time: 3m44.326147268s
minio_client: <ERROR> Failed to copy `/var/lib/mongo/tmp/FILE`. Bucket name `somebucketname-dev1-backup1` not valid.
(3) cp-main.go:626 cmd.doCopySession(..) Tags: [/var/lib/mongo/tmp/FILE]
(2) common-methods.go:570 cmd.uploadSourceToTargetURL(..) Tags: [/var/lib/mongo/tmp/FILE]
(1) common-methods.go:274 cmd.putTargetStream(..) Tags: [miniogw, http://127.0.0.1:9000/somebucketname-dev1-backup1/tmp/FILE]
(0) client-s3.go:1225 cmd.(*S3Client).Put(..)
Commit:1ec55a5178d7 | Release-Tag:RELEASE.2024-03-09T06-43-06Z | Host:dev1-app1 | OS:linux | Arch:amd64 | Lang:go1.21.8 | Mem:3.9 MiB/193 MiB | Heap:3.9 MiB/183 MiB
/var/lib/mongo/tmp/FILE: 156.25 GiB / 156.25 GiB ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ 46.64 MiB/s 57m10s
I noticed that Content-Length and X-Amz-Decoded-Content-Length stayed the same during the whole process:
Content-Length: 16800342
X-Amz-Decoded-Content-Length: 16777216
But in the last block it is:
Content-Length: 888993
and there is no X-Amz-Decoded-Content-Length.
Also, this header line is different:
Authorization: AWS4-HMAC-SHA256 Credential=1234/20240313/us-east-1/s3/aws4_request, SignedHeaders=content-type;host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
Can you run the command again like this:
mc cp --json --debug source target > out.log
If the log is too big to paste here, please upload it to something like WeTransfer where I can access it.
Maybe also try adding --conn-read-deadline=60m --conn-write-deadline=60m?
@klauspost this solved my issue; now everything works fine.
minio_client cp --conn-write-deadline=60m --conn-read-deadline=60m miniogw/first-dev1-backup1/tmp/FILE linked_second_gcp_project-dev1/second-dev1-backup1/tmp/FILE
dev1-backup1/tmp/FILE: 156.25 GiB / 156.25 GiB ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ 147.70 MiB/s 18m3s
So @klauspost, @zveinn, thank you for your assistance. Maybe these options could be added to the documentation (I didn't see them there).
P.S. (FYI): I faced another, non-critical issue during the tests. I was running with the --debug option, and within seconds of the file finishing copying all the memory (30 GB) was used, CPU usage grew to 100%, the load average climbed above 120, and if I missed pressing Ctrl+C the instance became unavailable. Because I kept hitting this, I used the --debug option the whole time... trying to debug and find the root cause, not knowing that it was the root cause, lol.
Hmm, it might be that --debug is allocating the debug logs without de-allocating them. I'll have a look at that.
I will also look into adding the connection deadline parameters to the documentation.
Glad it worked out!
@zveinn This is not really intended. We should probably look into removing this and have some alternative timeout method.
Transfers can take a long time, and it should not be necessary to set custom timeouts.
Noted, I've linked to this issue for when I look into this.
We really don't want users to be setting these, so they are not documented on purpose.
Your network cannot really be so unstable that you are only sending packets out once an hour. That doesn't make any sense to me.
The fact that it works with those settings tells me that your client network is broken and should be fixed first.
@harshavardhana the network is fine. The situation is this: I need to transfer a 160 GB file (I also have a 460 GB file to transfer, but I haven't tested with it yet) from one GCP project to another. At a speed of ~130-140 MB/s it takes ~18 minutes. But, as I mentioned above, any copy operation fails if it takes more than 10 minutes. I can't understand where you see an "unstable, broken network".
The timeout that is set is per stream, not for the entire request, so a stream must be idle for up to 10 minutes for this to get triggered.
That is why it's a network problem, unless the deadlines are completely wrong on our end or Go simply doesn't behave properly.
I did some testing of the deadlines. I transferred a 5 GB file with an upload limit of 1 KiB/s and added logs for the read/write ops. Even running at 1 KiB/s with the default read/write deadline of 10 minutes, we still don't get any problems.
If even a small amount of data is received, the connection will not hit the read/write deadlines; 1 KiB/s was enough to keep things alive.
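Roughly, that test was of the following form (a sketch only: the file, alias, and bucket names are placeholders, the KiB suffix for --limit-upload is an assumption, and the extra read/write logging was added in code and is not reproduced here):
truncate -s 5G testfile
minio_client cp --limit-upload=1KiB testfile miniogw/test-bucket/testfile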
Just some additional info.
I've run several tests on GCP and on AWS (same setup as on GCP). For a 5 GB file everything worked perfectly, even when the copy operation took 1.5 hours:
On GCP:
minio_client cp --limit-upload=10M FILE miniogw/gcp-project-dev1-backup1/tmp/
/mnt/maint-data/FILE: 5.00 GiB / 5.00 GiB ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ 9.26 MiB/s 9m12s
minio_client cp --limit-upload=8M FILE miniogw/gcp-project-dev1-backup1/tmp/
/mnt/maint-data/FILE: 5.00 GiB / 5.00 GiB ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ 7.46 MiB/s 11m26s
minio_client cp --limit-upload=1M FILE miniogw/gcp-project-dev1-backup1/tmp/
/mnt/maint-data/FILE: 5.00 GiB / 5.00 GiB ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ 972.52 KiB/s 1h29m51s
On AWS:
minio_client cp --limit-upload=10M FILE miniogw/aws-project-stag1-backup1/tmp/
/var/lib/mongo/FILE: 5.00 GiB / 5.00 GiB ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.54 MiB/s 8m56s
minio_client cp --limit-upload=8M FILE miniogw/aws-project-stag1-backup1/tmp/
/var/lib/mongo/FILE: 5.00 GiB / 5.00 GiB ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.63 MiB/s 11m11s
minio_client cp --limit-upload=1M FILE miniogw/aws-project-stag1-backup1/tmp/
/var/lib/mongo/FILE: 5.00 GiB / 5.00 GiB ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 975.34 KiB/s 1h29m35s
But with a bigger file (120 GB) it failed on GCP, while it worked fine on AWS:
On GCP:
minio_client cp --limit-upload=80M /var/lib/mongo/FILE miniogw/gcp-project-dev1-backup1/tmp/
minio_client: <ERROR> Failed to copy `/var/lib/mongo/FILE`. Bucket name `gcp-project-dev1-backup1` not valid.
/var/lib/mongo/FILE: 120.00 GiB / 120.00 GiB ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ 55.14 MiB/s 37m8s
minio_client cp --limit-upload=80M /var/lib/mongo/FILE miniogw/gcp-project-dev1-backup1/tmp/
minio_client: <ERROR> Failed to copy `/var/lib/mongo/FILE`. Bucket name `gcp-project-dev1-backup1` not valid.
/var/lib/mongo/FILE: 120.00 GiB / 120.00 GiB ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ 55.05 MiB/s 37m12s
minio_client cp /var/lib/mongo/FILE miniogw/gcp-project-dev1-backup1/tmp/
minio_client: <ERROR> Failed to copy `/var/lib/mongo/FILE`. Bucket name `gcp-project-dev1-backup1` not valid.
/var/lib/mongo/FILE: 120.00 GiB / 120.00 GiB ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ 48.11 MiB/s 42m34s
On AWS:
minio_client cp --limit-upload=80M FILE miniogw/aws-project-stag1-backup1/tmp/
/var/lib/mongo/FILE: 120.00 GiB / 120.00 GiB ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 76.08 MiB/s 26m55s
minio_client cp FILE miniogw/aws-project-stag1-backup1/tmp/
/var/lib/mongo/FILE: 120.00 GiB / 120.00 GiB ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 119.42 MiB/s 17m8s
It seems like the Google Cloud endpoint is behaving badly; either it's severing your connection after a while, or it's returning an invalid bucket name during a stat/list command.
You might want to raise an issue with Google about this.
Yeah, this is why I said it doesn't look like an mc problem at all.