feature request(kubernetes driver): reference kubeconfig file in driver opts
If you've got multiple Kubernetes builders whose kubeconfigs live in different files, there's no real way to run docker buildx ls and have it work. Colon-separated KUBECONFIG is currently broken, but even then that doesn't seem like a great solution: I'm not trying to stack configs here, I'm trying to reference two separate ones.
Proposal 1: a new driver opt kubeconfig-file that points at a kubeconfig file.
I don't know the codebase, but this seems like it would be pretty simple.
The one downside IMO is that this is necessarily dynamic: the kubeconfig file has a current context that might change and point at a different cluster entirely, depending on usage. In some cases that might be desired, but generally I doubt it.
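For illustration, usage might look something like this (kubeconfig-file is the proposed opt, not something buildx supports today; builder names and paths are made up):
$ docker buildx create --name cluster-a --driver kubernetes \
    --driver-opt kubeconfig-file=$HOME/.kube/cluster-a.yaml
$ docker buildx create --name cluster-b --driver kubernetes \
    --driver-opt kubeconfig-file=$HOME/.kube/cluster-b.yaml
$ docker buildx ls   # both builders listed without juggling KUBECONFIG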
Proposal 2: Somehow support importing a whole kubeconfig file
Alternative: support all (relevant) kubeconfig fields as driver opts. This seems hard, since driver opts are key-value, and the kubeconfig format is sprawling nested structs and lists.
Alternative: this is a little gross and unwieldy, but we could have an opt like kubeconfig-content whose value is the base64-encoded content of a whole kubeconfig file (a rough sketch follows the alternatives below).
Alternative: again, I don't know the codebase, but it looks as if the buildx create etc. commands just fiddle with config files, so I suppose an alternative would be to have a new place to store kubeconfig files.
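To make the kubeconfig-content idea concrete, a rough sketch (kubeconfig-content is hypothetical and not an opt buildx understands today; -w0 is GNU base64's flag to disable line wrapping):
$ docker buildx create --name cluster-a --driver kubernetes \
    --driver-opt kubeconfig-content="$(base64 -w0 ~/.kube/cluster-a.yaml)"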
Thoughts?
Hey @AkihiroSuda,
I think buildx cannot read my $KUBECONFIG env variable properly:
$ echo $KUBECONFIG
/var/folders/nq/vxjjn3311fg4q263qsxrghpcpzgp66/T/kubie-configOQma3r.yaml
In addition to this, does it make sense to add a --kubeconfig flag to the build command?
$ docker buildx build -t test:simple-golang-app -f Dockerfile .
[+] Building 0.0s (0/0)
error: no valid drivers found: cannot determine Kubernetes namespace, specify manually: stat :/Users/furkan.turkal/.kube/config:/Users/furkan.turkal/.kube/foo:/Users/furkan.turkal/.kube/bar: no such file or directory
As the error already says, I think buildx currently does not support stacked KUBECONFIG.
Wdyt?
$ cat ~/.docker/buildx/instances/youthful_meninsky
{"Name":"youthful_meninsky","Driver":"kubernetes","Nodes":[{"Name":"youthful_meninsky0","Endpoint":"kubernetes:///?deployment=\u0026kubeconfig=/Users/furkan.turkal/.kube/config:/Users/furkan.turkal/.kube/foo:/Users/furkan.turkal/.kube/bar","Platforms":null,"Flags":null,"ConfigFile":"","DriverOpts":{"replicas":"3"}}],"Dynamic":false}
We might check this file to verify that it stores a valid endpoint to read.
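As a possible workaround until stacked KUBECONFIG is supported, the colon-separated list could be flattened into a single file before creating the builder (the paths are the same ones as in my error above):
$ KUBECONFIG=~/.kube/config:~/.kube/foo:~/.kube/bar \
    kubectl config view --flatten > /tmp/merged-kubeconfig
$ KUBECONFIG=/tmp/merged-kubeconfig docker buildx create --driver kubernetes --use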
@Dentrax
Once you create a builder, the KUBECONFIG is stored in the Endpoint, so you can recreate it with KUBECONFIG=~/.kube/config-xxx.yaml docker buildx create.
This lock is just there to avoid using the wrong k8s cluster for a builder.
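For example (the builder name mybuilder and the kubeconfig path are placeholders), recreating a builder against a specific kubeconfig:
$ docker buildx rm mybuilder
$ KUBECONFIG=~/.kube/config-xxx.yaml docker buildx create --name mybuilder --driver kubernetes --use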
@nickatsegment
I think the feature is already supported with KUBECONFIG=~/.kube/config-xxx.yaml docker buildx create (KUBECONFIG is only needed at buildx create time).
Could you help check and close this issue?
I have similar problems using the KUBECONFIG env variable in a Jenkins CI/CD pipeline. I created a secret with the kubeconfig data and tried to use it in a stage as a variable. The first run works fine, but the second and subsequent builds fail. My stage looks like this:
stage('Build Docker Image portal-notary') {
    steps {
        timeout(time: 5, unit: 'MINUTES') {
            withCredentials([kubeconfigFile(credentialsId: 'k8s-kubeconfig-dev-env-docker-buildx', variable: 'KUBECONFIG')]) {
                sh 'docker buildx use portal-cluster-$NODE_NAME || docker buildx create --name portal-cluster-$NODE_NAME --driver kubernetes --driver-opt replicas=2,namespace=docker-buildx,loadbalance=sticky --use'
                sh 'docker buildx build -t nexus.domain.com/$JOB_NAME:$BUILD_NUMBER -o type=registry --push --progress=plain .'
            }
        }
    }
}
The error is:
error: no valid drivers found: cannot determine Kubernetes namespace, specify manually: stat /var/lib/jenkins/workspace/portal-notary@tmp/secretFiles/731a5e20-7a2b-4781-8b26-5b8e77fc16a6/kubeconfig: no such file or directory
Although buildx shows this error, kubectl can run any query without problems. It seems that buildx can't use the KUBECONFIG env variable. (P.S. the kubeconfig has only one user and one context.) So in my case I went with creating the kubeconfig file on the filesystem and using it from there:
stage('Build Docker Image portal-notary') {
    steps {
        timeout(time: 5, unit: 'MINUTES') {
            withCredentials([kubeconfigFile(credentialsId: 'k8s-kubeconfig-dev-env-docker-buildx', variable: 'KUBECONFIG')]) {
                sh 'kubectl config view --flatten > kubeconfig'
                sh 'KUBECONFIG=./kubeconfig docker buildx use portal-cluster-$NODE_NAME || KUBECONFIG=./kubeconfig docker buildx create --name portal-cluster-$NODE_NAME --driver kubernetes --driver-opt replicas=2,namespace=docker-buildx,loadbalance=sticky --use'
                sh 'KUBECONFIG=./kubeconfig docker buildx ls'
                sh 'KUBECONFIG=./kubeconfig docker buildx build --memory 1GB -t nexus.domain.com/$JOB_NAME:$BUILD_NUMBER -o type=registry --push --progress=plain .'
            }
        }
    }
}
To me it's strange that we have to reference the kubeconfig file in such a rigid way.
Previously, a buildx builder created with KUBECONFIG=<path to kubeconfig> docker buildx create --name remote --use --bootstrap would work fine when doing a docker buildx ls or a BUILDX_BUILDER=remote docker buildx build . Now the behavior has changed and KUBECONFIG has to be explicitly set; I think it used to read the KUBECONFIG from the saved data.
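To illustrate what I mean (the kubeconfig path is only an example):
$ KUBECONFIG=~/.kube/remote.yaml docker buildx create --name remote --use --bootstrap
# these used to work with KUBECONFIG unset, reading the saved endpoint data:
$ docker buildx ls
$ BUILDX_BUILDER=remote docker buildx build .
# now KUBECONFIG has to be set explicitly again:
$ KUBECONFIG=~/.kube/remote.yaml docker buildx ls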
Just a note that KUBE_CONTEXT does not appear to work properly.