Add design to provide lakeFSFS with temporary credentials from lakeFS
Note this comment.
Thanks, that's an interesting approach! I've been thinking about this and I feel that perhaps the read path actually is more complex than the write path.
The 3 main things that concern me:
- User permissions: Once the role is assumed by the client application, which permissions are granted? i.e. if I identify as lakeFS user A and request an STS token, how will the attached policy enforce that I can only read what A can read? (See the sketch after this list.) This is also a problem with the current s3a implementation, but at least there we can tell admins to scope credentials to specific repositories - from what I understand, the STS assumed role will have access to all repositories..?
- What happens with data that is external to the storage namespace (for example, imported data)? Will the assumed role have read access to that as well? Will it have the same privileges as the role assumed by the lakeFS server? If so, that's a pretty big privilege escalation..
- Not sure this is a complication but something worth looking into: I'm not sure it's easy to support e.g. GCP with this approach. I think all cloud providers (and even MinIO) support something similar to STS, but it looks like the ergonomics of each are pretty different.
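For context on the scoping question in the first bullet, here is a minimal sketch (Python/boto3; the role ARN, bucket, and prefix are all hypothetical placeholders) of how a server could narrow an assumed role per request with an inline session policy, so the returned credentials only cover one repository's storage namespace. Session policies are intersected with the role's own permissions, so this could be one way to limit what the client can touch, but it doesn't by itself answer the question about data imported from outside the storage namespace.

```python
import json
import boto3

# Hypothetical values for illustration only.
ROLE_ARN = "arn:aws:iam::123456789012:role/lakefs-data-access"
BUCKET = "example-bucket"
REPO_PREFIX = "repos/my-repo/*"  # storage namespace of the repo user A may read

# Inline session policy: intersected with the base role's policy, so the
# temporary credentials can at most read objects under this one prefix.
session_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{BUCKET}/{REPO_PREFIX}"],
        }
    ],
}

sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn=ROLE_ARN,
    RoleSessionName="lakefs-user-A",
    Policy=json.dumps(session_policy),
    DurationSeconds=900,  # short-lived credentials
)
creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration
```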
We're going with presigned URLs now!
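For readers landing here later, a rough illustration (Python/boto3; bucket and key are made up) of why presigned URLs sidestep the scoping questions above: the server signs a URL for a single object with a short expiry after checking the user's lakeFS permissions, so the client never holds cloud credentials at all.

```python
import boto3

# Hypothetical bucket/key; the server would sign on behalf of the user
# after its own permission check, then hand the URL to the client.
s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket", "Key": "repos/my-repo/data/object-1"},
    ExpiresIn=300,  # seconds; scoped to exactly this one object
)
# The client then performs a plain HTTP GET against `url` -- no STS token
# or assumed role needed on the client side.
```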