S3-compliant storage: "Transfering payloads in multiple chunks using aws-chunked is not supported."
Summary
I'm trying to create a lakeFS repository stored on S3-compliant object storage from the cloud provider OVH. I have been able to create the repository in a bucket in AWS S3 and also in a bucket in MinIO. However, I would like to use a bucket from OVH's S3-compliant object storage.
From the Kubernetes pod logs, I'm getting the following error:
time="2021-09-14T10:13:24Z" level=error msg="bad S3 PutObject response" func="pkg/block/s3.(*Adapter).streamToS3" file="build/pkg/block/s3/adapter.go:233" error="s3 error: <?xml version='1.0' encoding='UTF-8'?>\n<Error><Code>NotImplemented</Code><Message>Transfering payloads in multiple chunks using aws-chunked is not supported.</Message><RequestId>tx383dd527c8b74b65a3ecd-00614075c6</RequestId></Error>" host="127.0.0.1:31594" method=POST operation=PutObject path=/api/v1/repositories request_id=9d9ae76d-741c-418a-ba00-ad9210267ed8 service_name=rest_api status_code=501 url="https://ovhbucket.s3.sbg.cloud.ovh.net/dummy"
The bucket ovhbucket is already created in OVH object storage, and I'm able to access and manage it using the AWS CLI.
I think that OVH S3 allows uploading multi-part objects. Maybe it has something to do with the headers used when calling the S3 REST API?
Details
I have installed lakeFS using Helm on a Kubernetes cluster (deployed using MicroK8s on an Ubuntu Server VM).
microk8s helm3 install -f ~/ovh-conf-values.yaml lakefs lakefs/lakefs
Where ovh-conf-values.yaml contains:
secrets:
  databaseConnectionString: postgres://postgres:[email protected]:5432/postgres?sslmode=disable
  authEncryptSecretKey: [SECRET_KEY]
service:
  type: NodePort
  port: 5434
lakefsConfig: |
  database:
    connection_string: "postgres://postgres:[email protected]:5432/postgres?sslmode=disable"
  logging:
    format: text
    level: DEBUG
    output: "-"
  auth:
    encrypt:
      secret_key: [SECRET_KEY]
  blockstore:
    type: s3
    s3:
      region: sbg
      endpoint: https://s3.sbg.cloud.ovh.net
      credentials:
        access_key_id: [OVH_ACCESS_KEY]
        secret_access_key: [OVH_SECRET_KEY]
I have been trying different configurations following various examples (AWS config, MinIO config, etc.) from the lakeFS configuration reference, but I'm always getting the error above.
To create the repository, I have tried the lakeFS web interface and also installed and configured lakectl, but I get the same error from both.
lakectl repo create lakefs://ovhrepo s3://ovhbucket
Thanks for reporting @kevinv21! I'll try to give some context into what's going on:
When uploading an object into S3, the API requires signing the request and passing that signature as part of its headers. In order to sign the request, the body must be read in its entirety. For small objects, it's possible for lakeFS to do this in-memory: read the request from the user, buffer it into memory, wait for all of it to be received, calculate a signature and generate a request for S3. As you'd imagine, this would make scaling lakeFS much harder, as the memory requirements would then have to be the sum of all concurrent request bodies. Hard to predict and expensive to scale.
To work around that, we rely on an HTTP capability that S3 supports: chunked Transfer-Encoding. This allows reading small chunks from the originating request, signing them individually and moving on to the next chunk. It's an AWS API feature: by adding "aws-chunked" as the Content-Encoding header value and passing in chunks that contain individual signatures, we are able to calculate signatures without first reading in the entire request.
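For a rough picture of what this looks like on the wire, here is an illustrative sketch based on the AWS SigV4 streaming scheme (bucket and key taken from the logs above; signatures and sizes are placeholders, not a byte-exact capture):

PUT /ovhbucket/dummy HTTP/1.1
Host: s3.sbg.cloud.ovh.net
Content-Encoding: aws-chunked
Transfer-Encoding: chunked
x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD
x-amz-decoded-content-length: 65536
Authorization: AWS4-HMAC-SHA256 Credential=..., SignedHeaders=..., Signature=<seed signature>

10000;chunk-signature=<signature chaining the seed signature and this chunk's data>
<65536 bytes of object data>
0;chunk-signature=<signature closing the chain>

Each chunk carries its own signature, so the server can verify the payload incrementally and the client never has to hold the whole body in memory.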
This also works for MinIO and other S3-compatible storage systems we've seen, but apparently not OVH, given the error message you've provided: "Transfering payloads in multiple chunks using aws-chunked is not supported."
Side note: another way around this is to turn large PutObject requests into multi-part uploads. The downside, and the reason we chose not to do that, is that multipart uploads incur a different per-request cost than normal uploads. This makes calculating the TCO when running on native S3 harder for users.
I can think of two solutions:
- special-casing: in the S3 block adapter, if we can figure out at runtime that chunking is not supported (i.e. getting a NotImplemented when attempting a write), change the behavior from chunking to multipart uploads automatically (sketched below).
- writing an adapter for OVH: one that basically wraps the S3 adapter, replacing the streamToS3 function with one that does multipart uploads.
Edit: While these 2 solutions are straightforward enough for implementing PutObject, this also needs to be done for multipart uploads, which complicates things further (i.e. taking UploadPart requests and potentially breaking them up into multiple UploadPart requests).
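A minimal sketch of the first option, assuming the aws-sdk-go (v1) client that lakeFS uses in pkg/block/s3. The helper putWithFallback and its signature are hypothetical, not actual lakeFS code:

package s3fallback

import (
	"io"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// putWithFallback first tries the normal streaming (aws-chunked) upload and,
// if the store answers 501 NotImplemented, retries as a multipart upload.
func putWithFallback(uploader *s3manager.Uploader, streamPut func(io.Reader) error, bucket, key string, body io.ReadSeeker) error {
	err := streamPut(body) // the existing chunked PutObject path
	aerr, ok := err.(awserr.Error)
	if !ok || aerr.Code() != "NotImplemented" {
		return err
	}
	// The store rejected aws-chunked. Rewind and retry via multipart upload,
	// which streams parts without the aws-chunked Content-Encoding.
	if _, serr := body.Seek(0, io.SeekStart); serr != nil {
		return err
	}
	_, err = uploader.Upload(&s3manager.UploadInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   body,
	})
	return err
}

Note that the retry requires a rewindable body; for true streaming request bodies the data would have to be buffered or re-read from the client, which is part of what makes this non-trivial.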
#2527 is the same issue, I think, but with a different S3 "compatible" object store.
Hi,
I confirm that S3 "aws-chunked" is not supported by the OVHcloud offer. The use of multi-part upload in the streamToS3 function would have no impact on the TCO, as API calls are included in the offer.
@kevinv21, you opened an issue on the OVH public cloud roadmap. Thanks for that! Others: https://github.com/ovh/public-cloud-roadmap/issues/129#issuecomment-1112928815 seems to indicate this is resolved and on the way to users. Will be happy to hear of your experiences!
I think this may be an issue on Swift storage as well. I get an output like this:
time="2022-05-27T15:12:18Z" level=error msg="bad S3 PutObject response" func="pkg/block/s3.(*Adapter).streamToS3" file="build/pkg/block/s3/adapter.go:250" error="s3 error: <?xml version=\"1.0\" encoding=\"UTF-8\"?><Error><Code>NotImplemented</Code><RequestId>tx000000000000045e10e0e-006290ea51-385b2f-default</RequestId><HostId>385b2f-default-default</HostId></Error>" host="localhost:8000" method=POST operation=PutObject path=/api/v1/repositories request_id=2a4f8a05-c724-48d9-9dae-fbc671e730f5 service_name=rest_api status_code=501 url="https://my-swift-endpoint/lakefs/dummy"
Can't quite tell if this is the same issue, since it just says NotImplemented. The Swift/S3 support matrix is here; it looks like most operations are supported.
Thanks for reporting this @jacobdanovitch! According to the compatibility matrix it should be supported. Can you share which version of Swift/OpenStack you're running? If it's a recent version, I believe this should be reported as a Swift bug.
It is supported on the new S3 classes of storage, which are not based on Swift but on S3 middleware. Try the 2 tiers of storage: High Performance (endpoint: https://s3.gra.perf.cloud.ovh.net) or Standard Object Storage (subscribe first here: https://labs.ovh.com/, endpoint: https://s3.gra.io.cloud.ovh.net/)
https://docs.ovh.com/fr/storage/s3/compatibilite-s3/
Thanks for reporting this @jacobdanovitch! According to the compatibility matrix it should be supported. Can you share which version of Swift/OpenStack you're running? If it's a recent version, I believe this should be reported as a Swift bug.
Yep, my teammate found an open PR in Swift working on this. Unfortunately it doesn't look like it's all the way there yet, and even when it's merged I doubt our OpenStack provider will be in too much of a rush to upgrade. Is there any workaround possible here? I was thinking that putting a MinIO gateway in front of it might work, but (1) I'm not sure if MinIO would have the same issue and (2) it looks like they're deprecating gateways (though maybe it could work as a stop-gap).
Okay, it seems like putting a MinIO gateway in front of the Swift S3 endpoint solves this (temporarily). Here's a minimal docker-compose based on lakeFS's:
version: '3.7'
services:
  postgres:
    image: postgres:11
    container_name: postgres
    environment:
      POSTGRES_USER: lakefs
      POSTGRES_PASSWORD: lakefs
  minio-setup:
    image: minio/mc
    container_name: minio-setup
    environment:
      - MC_HOST_lakefs=http://${S3_ACCESS_KEY}:${S3_SECRET_KEY}@minio:9000
    depends_on:
      - minio
    command: ["mb", "lakefs/example"]
  minio:
    container_name: minio
    image: minio/minio:RELEASE.2022-05-26T05-48-41Z
    ports:
      - 9000:9000
      - 9001:9001
    command:
      - "gateway"
      - "s3"
      - "${S3_ENDPOINT}"
      - "--console-address"
      - ":9001"
    environment:
      AWS_ACCESS_KEY_ID: ${S3_ACCESS_KEY}
      AWS_SECRET_ACCESS_KEY: ${S3_SECRET_KEY}
      MINIO_ROOT_USER: ${S3_ACCESS_KEY}
      MINIO_ROOT_PASSWORD: ${S3_SECRET_KEY}
  lakefs:
    image: treeverse/lakefs:latest
    container_name: lakefs
    ports:
      - "8001:8000"
    depends_on:
      - postgres
      - minio-setup
    environment:
      - LAKEFS_BLOCKSTORE_TYPE=s3
      - LAKEFS_BLOCKSTORE_S3_FORCE_PATH_STYLE=true
      - LAKEFS_BLOCKSTORE_S3_DISCOVER_BUCKET_REGION=false
      - LAKEFS_BLOCKSTORE_S3_ENDPOINT=http://minio:9000
      - LAKEFS_BLOCKSTORE_S3_CREDENTIALS_ACCESS_KEY_ID=${S3_ACCESS_KEY}
      - LAKEFS_BLOCKSTORE_S3_CREDENTIALS_SECRET_ACCESS_KEY=${S3_SECRET_KEY}
      - LAKEFS_AUTH_ENCRYPT_SECRET_KEY=${LAKEFS_AUTH_ENCRYPT_KEY}
      - LAKEFS_DATABASE_CONNECTION_STRING=postgres://lakefs:lakefs@postgres/postgres?sslmode=disable
      - LAKEFS_STATS_ENABLED=false
      - LAKEFS_LOGGING_LEVEL
    entrypoint: ["/app/wait-for", "postgres:5432", "--", "/app/lakefs", "run"]
  lakefs-setup:
    image: treeverse/lakefs:latest
    container_name: lakefs-setup
    depends_on:
      - lakefs
    environment:
      - LAKEFS_BLOCKSTORE_TYPE=s3
      - LAKEFS_BLOCKSTORE_S3_FORCE_PATH_STYLE=true
      - LAKEFS_BLOCKSTORE_S3_DISCOVER_BUCKET_REGION=false
      - LAKEFS_BLOCKSTORE_S3_ENDPOINT=http://minio:9000
      - LAKEFS_BLOCKSTORE_S3_CREDENTIALS_ACCESS_KEY_ID=${S3_ACCESS_KEY}
      - LAKEFS_BLOCKSTORE_S3_CREDENTIALS_SECRET_ACCESS_KEY=${S3_SECRET_KEY}
      - LAKEFS_AUTH_ENCRYPT_SECRET_KEY=${LAKEFS_AUTH_ENCRYPT_KEY}
      - LAKEFS_DATABASE_CONNECTION_STRING=postgres://lakefs:lakefs@postgres/postgres?sslmode=disable
      - LAKECTL_CREDENTIALS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
      - LAKECTL_CREDENTIALS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
      - LAKECTL_SERVER_ENDPOINT_URL=http://lakefs:8000
    entrypoint: ["/app/wait-for", "postgres:5432", "--", "sh", "-c",
      "lakefs setup --user-name docker --access-key-id AKIAIOSFODNN7EXAMPLE --secret-access-key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY && lakectl repo create lakefs://example s3://example"
    ]
I've pinned the MinIO version, as the gateway functionality will literally be removed in two days, so this is definitely just a stop-gap solution, but it seems to work OK. Not sure of the performance implications either.
I just tested it against the new high-performance OVH endpoint https://s3.sbg.perf.cloud.ovh.net, without success.
Below are the logs of the lakefs container:
{"action":"create_repo","file":"build/pkg/api/controller.go:3483","func":"pkg/api.(*Controller).LogAction","host":"127.0.0.1:8000","level":"debug","message_type":"action","method":"POST","msg":"performing API action","path":"/api/v1/repositories","request_id":"3727e6b2-de89-4082-80f4-24ca4a5189a9","service":"api_gateway","service_name":"rest_api","time":"2022-09-16T13:48:16Z"}
{"file":"build/pkg/logging/aws.go:8","func":"pkg/logging.(*AWSAdapter).Log","level":"debug","msg":"DEBUG: Request s3/GetObject Details:\n---[ REQUEST POST-SIGN ]-----------------------------\nGET /test-fadam/test/dummy HTTP/1.1\r\nHost: s3.sbg.perf.cloud.ovh.net\r\nUser-Agent: aws-sdk-go/1.37.26 (go1.17.8; linux; amd64)\r\nAuthorization: AWS4-HMAC-SHA256 Credential=REDACTED/20220916/sbg/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=28affcbe037df32cf114dead0e04590a5e59f1c7009637d16bff1cc6923a5808\r\nX-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\r\nX-Amz-Date: 20220916T134816Z\r\nAccept-Encoding: gzip\r\n\r\n\n-----------------------------------------------------","sdk":"aws","time":"2022-09-16T13:48:16Z"}
{"file":"build/pkg/logging/aws.go:8","func":"pkg/logging.(*AWSAdapter).Log","level":"debug","msg":"DEBUG: Response s3/GetObject Details:\n---[ RESPONSE ]--------------------------------------\nHTTP/1.1 404 Not Found\r\nTransfer-Encoding: chunked\r\nContent-Type: application/xml\r\nDate: Fri, 16 Sep 2022 13:48:16 GMT\r\nX-Amz-Id-2: txbc461e61360847a8a48cf-0063247ea0\r\nX-Amz-Request-Id: txbc461e61360847a8a48cf-0063247ea0\r\nX-Openstack-Request-Id: txbc461e61360847a8a48cf-0063247ea0\r\nX-Trans-Id: txbc461e61360847a8a48cf-0063247ea0\r\n\r\n\n-----------------------------------------------------","sdk":"aws","time":"2022-09-16T13:48:16Z"}
{"file":"build/pkg/logging/aws.go:8","func":"pkg/logging.(*AWSAdapter).Log","level":"debug","msg":"DEBUG: Validate Response s3/GetObject failed, attempt 0/5, error NoSuchKey: The specified key does not exist.\n\tstatus code: 404, request id: txbc461e61360847a8a48cf-0063247ea0, host id: txbc461e61360847a8a48cf-0063247ea0","sdk":"aws","time":"2022-09-16T13:48:16Z"}
{"error":"s3 error: \u003c?xml version='1.0' encoding='UTF-8'?\u003e\n\u003cError\u003e\u003cCode\u003eInternalError\u003c/Code\u003e\u003cMessage\u003eunexpected status code 500\u003c/Message\u003e\u003cRequestId\u003etx8c1f610d817a4f43a9214-0063247ea0\u003c/RequestId\u003e\u003c/Error\u003e","file":"build/pkg/block/s3/adapter.go:250","func":"pkg/block/s3.(*Adapter).streamToS3","host":"127.0.0.1:8000","level":"error","method":"POST","msg":"bad S3 PutObject response","operation":"PutObject","path":"/api/v1/repositories","request_id":"3727e6b2-de89-4082-80f4-24ca4a5189a9","service_name":"rest_api","status_code":500,"time":"2022-09-16T13:48:16Z","url":"https://s3.sbg.perf.cloud.ovh.net/test-fadam/test/dummy"}
{"error":"s3 error: \u003c?xml version='1.0' encoding='UTF-8'?\u003e\n\u003cError\u003e\u003cCode\u003eInternalError\u003c/Code\u003e\u003cMessage\u003eunexpected status code 500\u003c/Message\u003e\u003cRequestId\u003etx8c1f610d817a4f43a9214-0063247ea0\u003c/RequestId\u003e\u003c/Error\u003e","file":"build/pkg/api/controller.go:1297","func":"pkg/api.(*Controller).CreateRepository","level":"warning","msg":"Could not access storage namespace","reason":"unknown","service":"api_gateway","storage_namespace":"s3://test-fadam/test/","time":"2022-09-16T13:48:16Z"}
{"file":"build/pkg/httputil/tracing.go:149","func":"pkg/httputil.TracingMiddleware.func1.1","host":"127.0.0.1:8000","level":"trace","method":"POST","msg":"HTTP call ended","path":"/api/v1/repositories","request_body":"{\"default_branch\":\"main\",\"name\":\"test-fadam\",\"storage_namespace\":\"s3://test-fadam/test/\"}","request_id":"3727e6b2-de89-4082-80f4-24ca4a5189a9","response_body":"{\"message\":\"failed to create repository: failed to access storage\"}\n","response_headers":{"Content-Type":["application/json"],"X-Request-Id":["3727e6b2-de89-4082-80f4-24ca4a5189a9"]},"sent_bytes":0,"service_name":"rest_api","status_code":400,"time":"2022-09-16T13:48:16Z","took":61526451}
I used this configuration:
logging:
  format: "json"
  level: "TRACE"
  audit_log_level: "TRACE"
blockstore:
  type: s3
  s3:
    region: "sbg"
    credentials:
      access_key_id: "REDACTED"
      secret_access_key: "REDACTED"
    endpoint: "https://s3.sbg.perf.cloud.ovh.net"
    force_path_style: true
    discover_bucket_region: false
Hi @fadam-csgroup ,
That is really disappointing, given that lakeFS should now work on OVH (https://github.com/treeverse/lakeFS/issues/2471#issuecomment-1112956193).
Thanks for sending detailed lakeFS debug logs! Looking at them, it appears that this might actually be another error. When lakeFS tries to put-object its dummy test object, it receives this response (reformatted to be nicer JSON):
{
  "file": "build/pkg/logging/aws.go:8",
  "func": "pkg/logging.(*AWSAdapter).Log",
  "level": "debug",
  "msg": "DEBUG: Response s3/GetObject Details:\n---[ RESPONSE ]--------------------------------------\nHTTP/1.1 404 Not Found\r\nTransfer-Encoding: chunked\r\nContent-Type: application/xml\r\nDate: Fri, 16 Sep 2022 13:48:16 GMT\r\nX-Amz-Id-2: txbc461e61360847a8a48cf-0063247ea0\r\nX-Amz-Request-Id: txbc461e61360847a8a48cf-0063247ea0\r\nX-Openstack-Request-Id: txbc461e61360847a8a48cf-0063247ea0\r\nX-Trans-Id: txbc461e61360847a8a48cf-0063247ea0\r\n\r\n\n-----------------------------------------------------",
  "sdk": "aws",
  "time": "2022-09-16T13:48:16Z"
}
That 404 is really puzzling! So sorry for bringing up two things that you might already have tried:
- Could you verify the bucket name test-fadam?
- Could you try again without force_path_style: true? (Perhaps this endpoint only supports host-based addressing for some reason...)
Thanks (and sorry if this is belabouring things you've already done...).
Hi @arielshaqed
In fact, I ordered the high-performance OVH endpoint after reading your comment (https://github.com/treeverse/lakeFS/issues/2471#issuecomment-1112956193). Before this, the error was the original "aws-chunked" one.
To me, the 404 is expected: in my understanding, lakeFS tries to verify that the repo does not exist by treating the dummy file as a "lock file". Then, it tries to create this dummy "lock file" using the PutObject method, but receives the unexpected 500 error, reformatted below:
{
  "error": "s3 error: <?xml version='1.0' encoding='UTF-8'?>\n<Error><Code>InternalError</Code><Message>unexpected status code 500</Message><RequestId>tx8c1f610d817a4f43a9214-0063247ea0</RequestId></Error>",
  "file": "build/pkg/block/s3/adapter.go:250",
  "func": "pkg/block/s3.(*Adapter).streamToS3",
  "host": "127.0.0.1:8000",
  "level": "error",
  "method": "POST",
  "msg": "bad S3 PutObject response",
  "operation": "PutObject",
  "path": "/api/v1/repositories",
  "request_id": "3727e6b2-de89-4082-80f4-24ca4a5189a9",
  "service_name": "rest_api",
  "status_code": 500,
  "time": "2022-09-16T13:48:16Z",
  "url": "https://s3.sbg.perf.cloud.ovh.net/test-fadam/test/dummy"
}
- I'm sure of the bucket name: I tried to put/delete objects in it using s3cmd, with success.
- Without force_path_style: true, the response is very similar:
{"action":"create_repo","file":"build/pkg/api/controller.go:3483","func":"pkg/api.(*Controller).LogAction","host":"127.0.0.1:8000","level":"debug","message_type":"action","method":"POST","msg":"performing API action","path":"/api/v1/repositories","request_id":"97e2e150-c198-4be6-82e2-b7e7e38c75ee","service":"api_gateway","service_name":"rest_api","time":"2022-09-19T07:14:48Z"}
{"bucket":"test-fadam","file":"build/pkg/block/s3/client_cache.go:68","func":"pkg/block/s3.(*ClientCache).getBucketRegion","host":"127.0.0.1:8000","level":"debug","method":"POST","msg":"requesting region for bucket","path":"/api/v1/repositories","request_id":"97e2e150-c198-4be6-82e2-b7e7e38c75ee","service_name":"rest_api","time":"2022-09-19T07:14:48Z"}
{"bucket":"test-fadam","file":"build/pkg/block/s3/client_cache.go:83","func":"pkg/block/s3.(*ClientCache).Get","host":"127.0.0.1:8000","level":"debug","method":"POST","msg":"creating client for region","path":"/api/v1/repositories","region":"sbg","request_id":"97e2e150-c198-4be6-82e2-b7e7e38c75ee","service_name":"rest_api","time":"2022-09-19T07:14:48Z"}
{"file":"build/pkg/logging/aws.go:8","func":"pkg/logging.(*AWSAdapter).Log","level":"debug","msg":"DEBUG: Request s3/GetObject Details:\n---[ REQUEST POST-SIGN ]-----------------------------\nGET /test/dummy HTTP/1.1\r\nHost: test-fadam.s3.sbg.perf.cloud.ovh.net\r\nUser-Agent: aws-sdk-go/1.37.26 (go1.17.8; linux; amd64)\r\nAuthorization: AWS4-HMAC-SHA256 Credential=REDACTED/20220919/sbg/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=089498fa30c4037bb60df2b00d15d2aa378e78316ba59996899d75dd48c6c012\r\nX-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\r\nX-Amz-Date: 20220919T071448Z\r\nAccept-Encoding: gzip\r\n\r\n\n-----------------------------------------------------","sdk":"aws","time":"2022-09-19T07:14:48Z"}
{"file":"build/pkg/logging/aws.go:8","func":"pkg/logging.(*AWSAdapter).Log","level":"debug","msg":"DEBUG: Response s3/GetObject Details:\n---[ RESPONSE ]--------------------------------------\nHTTP/1.1 404 Not Found\r\nTransfer-Encoding: chunked\r\nContent-Type: application/xml\r\nDate: Mon, 19 Sep 2022 07:14:48 GMT\r\nX-Amz-Id-2: tx4c7d4a5f28a04be2b2ba8-00632816e8\r\nX-Amz-Request-Id: tx4c7d4a5f28a04be2b2ba8-00632816e8\r\nX-Openstack-Request-Id: tx4c7d4a5f28a04be2b2ba8-00632816e8\r\nX-Trans-Id: tx4c7d4a5f28a04be2b2ba8-00632816e8\r\n\r\n\n-----------------------------------------------------","sdk":"aws","time":"2022-09-19T07:14:48Z"}
{"file":"build/pkg/logging/aws.go:8","func":"pkg/logging.(*AWSAdapter).Log","level":"debug","msg":"DEBUG: Validate Response s3/GetObject failed, attempt 0/5, error NoSuchKey: The specified key does not exist.\n\tstatus code: 404, request id: tx4c7d4a5f28a04be2b2ba8-00632816e8, host id: tx4c7d4a5f28a04be2b2ba8-00632816e8","sdk":"aws","time":"2022-09-19T07:14:48Z"}
{"error":"s3 error: \u003c?xml version='1.0' encoding='UTF-8'?\u003e\n\u003cError\u003e\u003cCode\u003eInternalError\u003c/Code\u003e\u003cMessage\u003eunexpected status code 500\u003c/Message\u003e\u003cRequestId\u003etxdc8783a441d44220b14a9-00632816e8\u003c/RequestId\u003e\u003c/Error\u003e","file":"build/pkg/block/s3/adapter.go:250","func":"pkg/block/s3.(*Adapter).streamToS3","host":"127.0.0.1:8000","level":"error","method":"POST","msg":"bad S3 PutObject response","operation":"PutObject","path":"/api/v1/repositories","request_id":"97e2e150-c198-4be6-82e2-b7e7e38c75ee","service_name":"rest_api","status_code":500,"time":"2022-09-19T07:14:48Z","url":"https://test-fadam.s3.sbg.perf.cloud.ovh.net/test/dummy"}
{"error":"s3 error: \u003c?xml version='1.0' encoding='UTF-8'?\u003e\n\u003cError\u003e\u003cCode\u003eInternalError\u003c/Code\u003e\u003cMessage\u003eunexpected status code 500\u003c/Message\u003e\u003cRequestId\u003etxdc8783a441d44220b14a9-00632816e8\u003c/RequestId\u003e\u003c/Error\u003e","file":"build/pkg/api/controller.go:1297","func":"pkg/api.(*Controller).CreateRepository","level":"warning","msg":"Could not access storage namespace","reason":"unknown","service":"api_gateway","storage_namespace":"s3://test-fadam/test/","time":"2022-09-19T07:14:48Z"}
{"file":"build/pkg/httputil/tracing.go:149","func":"pkg/httputil.TracingMiddleware.func1.1","host":"127.0.0.1:8000","level":"trace","method":"POST","msg":"HTTP call ended","path":"/api/v1/repositories","request_body":"{\"default_branch\":\"main\",\"name\":\"test-fadam\",\"storage_namespace\":\"s3://test-fadam/test/\"}","request_id":"97e2e150-c198-4be6-82e2-b7e7e38c75ee","response_body":"{\"message\":\"failed to create repository: failed to access storage\"}\n","response_headers":{"Content-Type":["application/json"],"X-Request-Id":["97e2e150-c198-4be6-82e2-b7e7e38c75ee"]},"sent_bytes":0,"service_name":"rest_api","status_code":400,"time":"2022-09-19T07:14:48Z","took":409451881}
Hello everyone, the problem has been fixed by OVH since January 3rd, and I was able to verify that lakeFS is now working correctly.
OVH was systematically checking the "Content-Length" header, which must not be sent when using multiple-chunk upload, hence the 500 error. The problem was related to the mechanism explained on the following AWS page: https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html
If you include the Transfer-Encoding header and specify any value other than identity, you must omit the Content-Length header.
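As a minimal illustration of that rule (a plain net/http sketch, not lakeFS code; the endpoint URL is a placeholder): when a Go client streams a request body of unknown length, the transport switches to chunked Transfer-Encoding and omits Content-Length entirely, which is exactly the request shape the OVH middleware was rejecting.

package main

import (
	"net/http"
	"os"
)

func main() {
	// Placeholder endpoint; stream stdin as the request body.
	req, err := http.NewRequest(http.MethodPut, "https://s3.example.net/bucket/key", os.Stdin)
	if err != nil {
		panic(err)
	}
	// ContentLength -1 marks the length as unknown, so the transport sends
	// "Transfer-Encoding: chunked" and, per the rule quoted above, no
	// Content-Length header at all.
	req.ContentLength = -1
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
}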
Thank you @geonux! Can you please confirm this issue could be closed? Or are there any changes still required on the lakeFS side?
Thank you @geonux! Can you please confirm this issue could be closed? Or are there any changes still required on the lakeFS side?
I haven't tried in a little while, but I think this issue still persists with OpenStack Swift.
This issue is now marked as stale after 90 days of inactivity, and will be closed soon. To keep it, mark it with the "no stale" label.
The bot can probably close this. @geonux @jacobdanovitch, if this still occurs on your provider, please do re-open with details of the current errors received; we obviously do not know how to advance on this without your info.