Private registry (GCR) - Failed to push
What happened? I'm trying to use a private registry in GCR and run my workspace on Minikube.
I have already authenticated on Minikube (via the gcp-auth addon) and on Docker with docker login, but while the workspace is building, the following error occurs:
...
[20:11:19] info #24 pushing layers
[20:11:20] info Delete build Pod 'devpod-xxx-buildkit'
[20:11:20] info #24 pushing layers 1.8s done
[20:11:20] debug Run command: kubectl --namespace core --kubeconfig /Users/xxx/.kube/config --context minikube delete po devpod-xxxx-buildkit --ignore-not-found --grace-period=10
[20:11:32] debug failed to push gcr.io/xxx/xxx/devpod:devpod-229aa450632bad185d131a1c16af6027: failed to authorize: failed to fetch oauth token: Post "https://gcr.io/v2/token": dial tcp: lookup gcr.io on [fe80::96ea:eaff:fed6:6645%en0]:53: no such host
[20:11:32] debug github.com/moby/buildkit/util/stack.Enable
...
What did you expect to happen instead? The workspace image is pushed to our private registry and the workspace opens successfully.
How can we reproduce the bug? (as minimally and precisely as possible)
- Use a private registry;
- Add a Kubernetes (minikube) provider;
- Create a workspace with a custom image;
- Try to open the repo in DevPod.
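For reference, the authentication mentioned above was set up roughly like this (a sketch from memory; key.json stands for a placeholder service-account key):

# Let Minikube inject my Google credentials into pods
minikube addons enable gcp-auth

# Authenticate the local Docker client against GCR
gcloud auth login
gcloud auth configure-docker
# or, with a service-account key:
cat key.json | docker login -u _json_key --password-stdin https://gcr.io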
My devcontainer.json:
{
  "name": "xxx",
  "build": {
    "dockerfile": "Dockerfile",
    "args": { "VARIANT": "18" }
  },
  "features": {
    "ghcr.io/devcontainers/features/common-utils:2": {},
    "ghcr.io/devcontainers/features/docker-outside-of-docker:1": {},
    "ghcr.io/devcontainers/features/git:1": {},
    "ghcr.io/devcontainers/features/kubectl-helm-minikube:1": {},
    "ghcr.io/devcontainers-contrib/features/zsh-plugins:0": {
      "plugins": "git git-flow F-Sy-H zsh-autosuggestions zsh-completions",
      "omzPlugins": "https://github.com/z-shell/F-Sy-H https://github.com/zsh-users/zsh-autosuggestions https://github.com/zsh-users/zsh-completions"
    },
    "ghcr.io/eitsupi/devcontainer-features/jq-likes:1": {},
    "ghcr.io/stuartleeks/dev-container-features/shell-history:0": {},
    "ghcr.io/rio/features/skaffold:2": {}
  },
  "customizations": {
    "vscode": {
      "extensions": [
        "firsttris.vscode-jest-runner",
        "snyk-security.snyk-vulnerability-scanner"
      ]
    }
  },
  "runArgs": [
    "-v", "/var/run/docker.sock:/var/run/docker.sock",
    "--mount", "type=bind,source=${env:HOME}${env:USERPROFILE}/.kube,target=/home/node/.kube-localhost",
    "--mount", "type=bind,source=${env:HOME}${env:USERPROFILE}/.config/gcloud,target=/home/node/.gcloud-localhost",
    "-e", "SYNC_LOCALHOST_KUBECONFIG=true",
    "-e", "SYNC_LOCALHOST_GCLOUD=true",
    "--cap-add=SYS_PTRACE", "--security-opt", "seccomp=unconfined"
  ],
  "postCreateCommand": [
    "/workspaces/xxx/.devcontainer/startup.sh",
    "npm install"
  ]
}
Local Environment:
- DevPod Version: 0.1.10
- Operating System: mac
- ARCH of the OS: ARM64
DevPod Provider:
- Cloud Provider: google
- Kubernetes Provider: v1.27.2 (minikube v1.30.1)
- Local/remote provider: docker
- Custom provider: provide imported provider.yaml config file
Anything else we need to know? I've run a pod with the custom image directly on k8s and the pull succeeded, and pulling the image with Docker locally also worked.
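Those checks were along these lines (a sketch; the image path and namespace are placeholders):

# Pulling the image with the local Docker daemon works
docker pull gcr.io/<project>/<image>:<tag>

# Running a throwaway pod with the same image on the cluster also works (the kubelet pull succeeds)
kubectl run pull-test --namespace core --image=gcr.io/<project>/<image>:<tag> --restart=Never
kubectl delete pod pull-test --namespace core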
Hi @adriellcardoso, thanks for creating the issue. How is access to your registry set up in your cluster? One way to authenticate would be to connect the registry credentials to a service account and then attach the service account to the devpod pod: https://github.com/loft-sh/devpod/blob/11f4e19da52f53b22aea70d85ab8cc2663672126/providers/kubernetes/provider.yaml#L64C19-L64C19
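A rough sketch of that setup (the secret name gcr-creds, the service account name devpod-sa, and key.json are placeholders; the exact provider option name is the one documented in the provider.yaml linked above):

# Store the GCR JSON key as a docker-registry secret in the workspace namespace
kubectl create secret docker-registry gcr-creds \
  --namespace core \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)"

# Create a service account and attach the pull secret to it
kubectl create serviceaccount devpod-sa --namespace core
kubectl patch serviceaccount devpod-sa --namespace core \
  -p '{"imagePullSecrets": [{"name": "gcr-creds"}]}'

# Then point the kubernetes provider's service account option at devpod-sa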
Hi @pascalbreuninger, thanks for your support. I've tried different ways to solve this problem, but I still face the same error:
[10:21:33] debug failed to push gcr.io/myrepo/path/devpod:devpod-ebe3d230ff093ea364acc7fd11311bee: failed to authorize: failed to fetch oauth token: Post "https://gcr.io/v2/token": dial tcp: lookup gcr.io on [fe80::96ea:eaff:fed6:6645%en0]:53: **no such host**
I tried to create a service account and specify it in the provider, but I can't see it in the YAML of the buildkit pod:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    container.apparmor.security.beta.kubernetes.io/buildkitd: unconfined
    container.seccomp.security.alpha.kubernetes.io/buildkitd: unconfined
  creationTimestamp: "2023-06-28T13:25:37Z"
  name: devpod-cloud-development-minikube-buildkit
  namespace: core
  resourceVersion: "34474"
  uid: 553d1f5a-a882-487e-880f-945577d478c1
spec:
  containers:
  - args:
    - --oci-worker-no-process-sandbox
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /google-app-creds.json
    - name: PROJECT_ID
      value: project-test
    - name: GCP_PROJECT
      value: project-test
    - name: GCLOUD_PROJECT
      value: project-test
    - name: GOOGLE_CLOUD_PROJECT
      value: project-test
    - name: CLOUDSDK_CORE_PROJECT
      value: project-test
    image: moby/buildkit:master-rootless
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - buildctl
        - debug
        - workers
      failureThreshold: 3
      initialDelaySeconds: 5
      periodSeconds: 30
      successThreshold: 1
      timeoutSeconds: 1
    name: buildkitd
    readinessProbe:
      exec:
        command:
        - buildctl
        - debug
        - workers
      failureThreshold: 3
      initialDelaySeconds: 2
      periodSeconds: 30
      successThreshold: 1
      timeoutSeconds: 1
    resources: {}
    securityContext:
      runAsGroup: 1000
      runAsUser: 1000
      seccompProfile:
        type: Unconfined
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /home/user/.local/share/buildkit
      name: buildkitd
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-646zz
      readOnly: true
    - mountPath: /google-app-creds.json
      name: gcp-creds
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: false
  imagePullSecrets:
  - name: gcr-creds
  - name: gcp-auth
  nodeName: minikube
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: buildkitd
  - name: kube-api-access-646zz
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
  - hostPath:
      path: /var/lib/minikube/google_application_credentials.json
      type: File
    name: gcp-creds
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2023-06-28T13:25:37Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2023-06-28T13:25:37Z"
    message: 'containers with unready status: [buildkitd]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2023-06-28T13:25:37Z"
    message: 'containers with unready status: [buildkitd]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2023-06-28T13:25:37Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://844f0dc4afb96380eb6b887d697c2472faf9565e0b71215323ec322226981b34
    image: moby/buildkit:master-rootless
    imageID: docker-pullable://moby/buildkit@sha256:e712cf138062ede1f1c2c7bc03a5787cab2b08d4ceb3328f093583dd130e4e3f
    lastState: {}
    name: buildkitd
    ready: false
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2023-06-28T13:25:37Z"
  hostIP: 192.168.49.2
  phase: Running
  podIP: 172.17.0.4
  podIPs:
  - ip: 172.17.0.4
  qosClass: BestEffort
  startTime: "2023-06-28T13:25:37Z"
UPDATE: I'm seeing a lot of errors like the one below for different URLs:
[16:48:41] debug Get "https://mcr.microsoft.com/v2/": dial tcp: lookup mcr.microsoft.com on [fe80::96ea:eaff:fed6:6645%en0]:53: no such host
Do you have any idea why I'm facing this problem? Is there some way I can fix it? Thanks!
I think this might be related to other private registry problems; another maintainer, @ThomasK33, is currently looking into this. Check out https://github.com/loft-sh/devpod/pull/348 to contribute to/discuss the upcoming changes soon-ish.
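In the meantime, since the failures are DNS lookups ("no such host"), it might be worth checking whether those hostnames resolve from inside the cluster at all, e.g. with a throwaway pod (a rough sketch; busybox image and the core namespace from your logs assumed):

# Check in-cluster DNS resolution for the registries that fail
kubectl run dns-test --namespace core --image=busybox:1.36 --restart=Never --rm -it -- nslookup gcr.io
kubectl run dns-test --namespace core --image=busybox:1.36 --restart=Never --rm -it -- nslookup mcr.microsoft.com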