lakeFS
Data version control for your data lake - Git for data
Since lakeFS is an S3 server, it could be useful for it to be compliant with the STS endpoint API calls (https://docs.aws.amazon.com/STS/latest/APIReference/API_Operations.html). I particularly use AssumeRoleWithWebIdentity with my S3 server (not lakeFS).
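For illustration only, a minimal Go sketch of what such a call could look like against an S3-compatible server, assuming it exposed an STS-style endpoint. The endpoint URL, role ARN, and token below are placeholders, not real lakeFS configuration; the SDK calls are the standard aws-sdk-go-v2 STS client.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sts"
)

func main() {
	ctx := context.Background()

	cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion("us-east-1"))
	if err != nil {
		log.Fatal(err)
	}

	// Point the STS client at the S3-compatible server's STS endpoint
	// (hypothetical URL; the server would need to implement this API).
	client := sts.NewFromConfig(cfg, func(o *sts.Options) {
		o.BaseEndpoint = aws.String("https://my-s3-server.example.com/sts")
	})

	out, err := client.AssumeRoleWithWebIdentity(ctx, &sts.AssumeRoleWithWebIdentityInput{
		RoleArn:          aws.String("arn:aws:iam::123456789012:role/reader"), // hypothetical role
		RoleSessionName:  aws.String("web-identity-session"),
		WebIdentityToken: aws.String("<OIDC token from the identity provider>"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("temporary access key:", aws.ToString(out.Credentials.AccessKeyId))
}
```

If the gateway honored this one action, standard AWS SDKs could obtain temporary credentials from an OIDC token without any custom client code.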
Create test infrastructure, similar to the lakectl tests, to test the lakefs commands.
Currently, the functionality to determine the lakeFS db-schema-version (by reading the highest version of the upward migration files, #2366) lives within the GitHub workflows that push the lakeFS Docker images...
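As a rough sketch of the logic being described, the snippet below scans a directory of migration files and picks the highest version number. The directory name and the file-name pattern (golang-migrate style `NNNN_name.up.sql`) are assumptions for illustration, not the actual lakeFS layout.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"regexp"
	"strconv"
)

// Matches upward migration files such as "0025_add_index.up.sql" (assumed pattern).
var upMigration = regexp.MustCompile(`^(\d+)_.*\.up\.sql$`)

// highestSchemaVersion returns the largest version prefix found among
// the upward migration files in dir.
func highestSchemaVersion(dir string) (int, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return 0, err
	}
	highest := 0
	for _, e := range entries {
		m := upMigration.FindStringSubmatch(e.Name())
		if m == nil {
			continue
		}
		v, err := strconv.Atoi(m[1])
		if err != nil {
			continue
		}
		if v > highest {
			highest = v
		}
	}
	return highest, nil
}

func main() {
	v, err := highestSchemaVersion("ddl") // hypothetical path to the migration files
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("db-schema-version:", v)
}
```

Moving logic like this into the application (or a small build tool) would let the schema version be derived at build or run time instead of inside the image-publishing workflows.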
Currently, each lakeFS instance collects statistics from the database, such as max connections, idle connections, etc. The current data is based on SQL statistics, which are probably irrelevant in the...
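For comparison, the connection-pool figures mentioned here (max, in-use, idle connections) are available per instance from Go's `database/sql` pool without issuing any SQL at all. A minimal sketch, assuming a Postgres DSN and the lib/pq driver, neither of which is necessarily what lakeFS uses:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // hypothetical driver choice for the example
)

func main() {
	// Placeholder DSN, shown only to illustrate where the stats come from.
	db, err := sql.Open("postgres", "postgres://user:pass@localhost:5432/lakefs?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// database/sql exposes client-side pool statistics; these describe this
	// instance's pool, not server-wide state gathered via SQL queries.
	s := db.Stats()
	fmt.Println("max open connections:", s.MaxOpenConnections)
	fmt.Println("open connections:", s.OpenConnections)
	fmt.Println("in use:", s.InUse)
	fmt.Println("idle:", s.Idle)
	fmt.Println("wait count:", s.WaitCount)
}
```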
Document the new S3 audit logs - schema, usage and examples
Closes #7479

## Change Description

### Background
Add IfAbsent to support LinkPhysicalAddress failure if the path already exists.

### Testing Details
Added a unit test.

### Breaking Change?
No
1. The import button should be disabled.
2. GC: the delete and edit policy buttons should be disabled.
3. The branch protection tab shouldn't be displayed in the repo options window....
Supporting this will allow providing the API with the flag and avoid race conditions where the entry already exists.
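To illustrate why the atomic flag matters, here is a self-contained sketch; the `Store` type, method names, and in-memory map are purely hypothetical and not the lakeFS implementation. It only contrasts a racy exists-then-write sequence with an atomic put-if-absent that can fail cleanly.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var ErrAlreadyExists = errors.New("path already exists")

// Store is a toy in-memory mapping from logical path to physical address.
type Store struct {
	mu      sync.Mutex
	entries map[string]string
}

// Racy pattern: between Exists and Put, a concurrent request can create
// the same path, so one of the two writers silently overwrites the other.
func (s *Store) Exists(path string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	_, ok := s.entries[path]
	return ok
}

func (s *Store) Put(path, physicalAddress string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.entries[path] = physicalAddress
}

// Atomic pattern: the existence check and the write happen under one lock,
// so the second caller gets ErrAlreadyExists instead of racing.
func (s *Store) PutIfAbsent(path, physicalAddress string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, ok := s.entries[path]; ok {
		return ErrAlreadyExists
	}
	s.entries[path] = physicalAddress
	return nil
}

func main() {
	s := &Store{entries: map[string]string{}}
	fmt.Println(s.PutIfAbsent("datasets/a.parquet", "s3://bucket/obj1")) // <nil>
	fmt.Println(s.PutIfAbsent("datasets/a.parquet", "s3://bucket/obj2")) // path already exists
}
```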
This combination does not work, see #7468:
- S3 gateway, performing a...
- multipart...
- copy operation...
- across buckets...
- if using sigV2...
- with the MinIO client[^*].

That...