Kubernetes self-signed certificate not copied to container
### Contributing guidelines

- [X] I've read the contributing guidelines and wholeheartedly agree

### I've found a bug and checked that ...

- [X] ... the documentation does not mention anything about my problem
- [X] ... there are no open or closed issues that are related to my problem

### Description
When creating a buildx Kubernetes builder with a config file for a private Docker registry, the custom config is mounted into the build pod, but the certificates are not available. Looking at the deployment, two ConfigMaps are created but only one is mounted.
Example buildx command:

```shell
docker buildx create \
  --bootstrap \
  --use \
  --name kube \
  --driver kubernetes \
  --config /etc/buildkit/buildkitd.toml \
  '--driver-opt="nodeselector=docker/buildkit=","namespace=buildkit"'
```
### Expected behaviour

Custom certificates should be mounted at `/etc/buildkit/certs/<registry>/`.

### Actual behaviour

There are no certificates mounted in the pod.

### Buildx version
```text
github.com/docker/buildx v0.10.4 c513d34
```
### Docker info

```text
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.10.4
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.17.3
    Path:     /usr/libexec/docker/cli-plugins/docker-compose
  scan: Docker Scan (Docker Inc.)
    Version:  v0.23.0
    Path:     /usr/libexec/docker/cli-plugins/docker-scan

Server:
 Containers: 233
  Running: 166
  Paused: 0
  Stopped: 67
 Images: 780
 Server Version: 23.0.6
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: false
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 3dce8eb055cbb6872793272b4f20ed16117344f8
 runc version: v1.1.7-0-g860f061
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
 Kernel Version: 5.4.0-73-generic
 Operating System: Ubuntu 20.04.6 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 64
 Total Memory: 125.7GiB
 Name: server-01
 ID: 6HQ7:ELB5:WJFJ:2CIK:SCF6:STIS:G57Q:3UNX:AJXY:4BYQ:6JCJ:YR3U
 Docker Root Dir: /srv/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Experimental: false
 Insecure Registries:
  dockerregistry-01:5000
  127.0.0.0/8
 Live Restore Enabled: false
```
### Builders list

```text
NAME/NODE   DRIVER/ENDPOINT                               STATUS    BUILDKIT   PLATFORMS
kube *      kubernetes
  kube0     kubernetes:///kube?deployment=&kubeconfig=    running   v0.11.6    linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
default     docker
  default   default                                       running   23.0.6     linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
```
### Configuration

Config `/etc/buildkit/buildkitd.toml`:

```toml
debug = true

[registry."dockerregistry-01"]
  ca = ["/etc/certs/cluster-signer.crt"]

  [[registry."dockerregistry-01".keypair]]
    key = "/etc/certs/dockerregistry-01.key"
    cert = "/etc/certs/dockerregistry-01.crt"
```
### Build logs

No response
### Additional info

Lines missing in the deployment:

```yaml
spec:
  template:
    spec:
      volumes:
        - name: config-1
          configMap:
            name: kube0-config-1
      containers:
        - name: buildkitd
          volumeMounts:
            - name: config-1
              mountPath: /etc/buildkit/certs/dockerregistry-01
```
@CptLemming I am facing the same issue, not being able to add my own self-signed certificate. Did you ever find a workaround?
I'm afraid the only workaround was to edit the deployment once it's in Kubernetes and manually add the missing config.
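For anyone else hitting this, the manual edit can be sketched as a strategic-merge patch rather than an interactive `kubectl edit`. This is only a sketch under the assumptions from my report above: the builder deployment is named `kube0`, it lives in the `buildkit` namespace, and the unmounted ConfigMap is `kube0-config-1`; adjust all three to your setup.

```yaml
# patch.yaml -- adds the second ConfigMap as a volume and mounts it into
# the buildkitd container. Strategic merge uses "name" as the merge key
# for volumes/containers, so the existing config-0 volume and its mount
# are left intact.
spec:
  template:
    spec:
      volumes:
        - name: config-1
          configMap:
            name: kube0-config-1
      containers:
        - name: buildkitd
          volumeMounts:
            - name: config-1
              mountPath: /etc/buildkit/certs/dockerregistry-01
```

Applied with `kubectl patch deployment kube0 -n buildkit --patch-file patch.yaml`. Note the patch has to be reapplied if buildx ever recreates the deployment.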