buildkit
Can't use directories as --mount=type=secret
Currently, directories can't be mounted as secrets using --mount=type=secret. This would be really helpful for mounting Google Cloud credentials. Current error:
rpc error: code = Unknown desc = read /Users/oliviercorradi/.config/gcloud: is a directory
I might be able to contribute if someone could point me to the right place?
Exact same issue here. Currently, the only way I can get gcloud credentials mounted into the container build context is to write a gnarly wrapper script that tars the ~/.config/gcloud directory and injects it into the container and then extracts it as part of the RUN step while being careful to clean it up afterwards (i.e. in the same RUN step). It would be a lot smoother if --mount=type=secret could just point at a directory.
If anyone has a workaround or a better option, please share!
Here's a workaround (if using Docker build with the experimental buildkit support):
tar_gcloud_config() {
  tar zc -C ~/.config/gcloud .
}
export DOCKER_BUILDKIT=1
docker build \
  -t example \
  --secret id=gcloud-config,src=<(tar_gcloud_config) \
  .
Then in the container, mount the secret in some temporary directory, extract it to ~/.config/gcloud, and then be sure to remove it before the end of the RUN step:
RUN \
  --mount=type=secret,id=gcloud-config,dst=/root/tmp/gcloud-config.tar.gz \
  mkdir -p -m 0700 ~/.config/gcloud && \
  tar zxf /root/tmp/gcloud-config.tar.gz -C ~/.config/gcloud && \
  gsutil cp gs://whatever/example . && \
  rm -rf ~/.config/gcloud
(The gsutil cp command is just an example; in practice it's usually something like a build.sh script instead.)
However, it appears that there's a (seemingly arbitrary?) limit of 500 KB on the size of the secret, so if there's any extra material in your gcloud config, special care has to be taken to scrub it and include only what's needed in the tar_gcloud_config example above.
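One way to stay under that limit is to tar only the files the build actually needs rather than the whole directory. A sketch; the file list here is an assumption and may differ across gcloud versions, so check what your build actually requires:

```shell
# Hypothetical minimal variant of tar_gcloud_config: include only selected
# entries so the resulting secret stays well under the 500 KB limit.
tar_gcloud_config_minimal() {
  local src="${1:-$HOME/.config/gcloud}"
  # Assumed file list -- adjust to what your gcloud tooling needs.
  tar zc -C "$src" \
    application_default_credentials.json \
    credentials.db \
    configurations
}
```

It can be passed to docker build the same way as before, e.g. `--secret id=gcloud-config,src=<(tar_gcloud_config_minimal)`.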
I suspect that this isn't how buildkit secrets are meant to be used, but I don't see any other reasonable way to use gcloud within a Docker-based build other than either creating a one-off service account to inject into the container as a build secret, or refactoring things to do any gcloud/gsutil steps outside of the container build instead.
It looks like buildkit also supports bind mounts, but (1) it's not clear whether they're safe to use for passing in sensitive information like gcloud credentials, and (2) they don't appear to be usable from the docker build buildkit integration.
(For context, the intent in my particular use case is for our development team to be able to locally build development containers to be run in Minikube/GKE, while the containers depend on private artifacts in Google Cloud Storage and/or Google Container Registry. I'm not sure how much this use case overlaps with OP's use case, but the point either way is that we're attempting to inherit gcloud credentials within a container build)
So in other words:
- 0: Is it reasonable to try to mount gcloud credentials into a container build context?
- 1: Are bind mounts suitable for passing sensitive information such as gcloud configuration/credentials?
- 2: If bind mounts are in fact safe to use for passing in gcloud credentials, is there a way to do it through docker build, or any concrete plans to support such functionality at some point? (OTOH perhaps we should be using buildkit directly)
- 3: Is there some other viable approach for getting gcloud credentials into the build container?
Thanks.
Seems like this file would be the place to start? https://github.com/moby/buildkit/blob/master/session/secrets/secretsprovider/file.go
This doesn't solve the general problem of not being able to use directories, but for gcloud users: keep in mind that for some applications (such as google-artifactregistry-auth), only the application_default_credentials.json file needs to be available, not the entire folder.
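In that case the single file can be passed straight through as a secret, with no tarring needed. A sketch, where `adc` is an arbitrary secret id and the default gcloud config location is assumed:

```shell
export DOCKER_BUILDKIT=1
docker build \
  -t example \
  --secret id=adc,src="$HOME/.config/gcloud/application_default_credentials.json" \
  .

# Then in the Dockerfile, mount it where the tooling expects it, e.g.:
# RUN --mount=type=secret,id=adc,dst=/root/.config/gcloud/application_default_credentials.json \
#     ./build.sh
```

Since the secret is only mounted for the duration of the RUN step, there's no cleanup to do afterwards.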
This is also an issue for AWS users who use SSO, since you can't predict the name of the file AWS uses to store the cached credentials. There are OK workarounds, though, like using aws configure export-credentials to put the credentials into a single file.
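That workaround can be sketched like this (the profile name, secret id, and S3 example are assumptions; aws configure export-credentials requires AWS CLI v2):

```shell
# Resolve the SSO-cached credentials into one predictably named file;
# --format env emits shell "export AWS_..." lines.
aws configure export-credentials --profile default --format env > /tmp/aws.env

export DOCKER_BUILDKIT=1
docker build -t example --secret id=aws-creds,src=/tmp/aws.env .

# In the Dockerfile, source the file within the same RUN step:
# RUN --mount=type=secret,id=aws-creds,dst=/root/aws.env \
#     . /root/aws.env && aws s3 cp s3://example-bucket/artifact .
```

Note that the exported credentials are temporary, so the file should be regenerated before each build rather than cached.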